CN113269737B - Fundus retina artery and vein vessel diameter calculation method and system - Google Patents


Info

Publication number
CN113269737B
CN113269737B (application CN202110536793.9A)
Authority
CN
China
Prior art keywords
model
arteriovenous
blood vessel
fundus
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110536793.9A
Other languages
Chinese (zh)
Other versions
CN113269737A (en)
Inventor
郭佩宏
张大磊
祖建
胡娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Airdoc Technology Co Ltd
Original Assignee
Beijing Airdoc Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Airdoc Technology Co Ltd filed Critical Beijing Airdoc Technology Co Ltd
Priority to CN202110536793.9A priority Critical patent/CN113269737B/en
Publication of CN113269737A publication Critical patent/CN113269737A/en
Application granted granted Critical
Publication of CN113269737B publication Critical patent/CN113269737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T 7/0012 Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06N 3/045 Neural networks; architecture; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 7/11 Segmentation; edge detection; region-based segmentation
    • G06T 7/12 Segmentation; edge detection; edge-based segmentation
    • G06T 7/13 Edge detection
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T 2207/10024 Image acquisition modality: color image
    • G06T 2207/20032 Filtering details: median filtering
    • G06T 2207/20081 Special algorithmic details: training; learning
    • G06T 2207/30041 Subject of image: eye; retina; ophthalmic

Abstract

The invention discloses a method and a system for calculating the arteriovenous vessel diameters of the fundus retina. A color fundus retina image is preprocessed; a fundus retinal arteriovenous semantic segmentation model based on iterative semi-supervised learning is constructed, and arteriovenous segmentation is performed on the preprocessed image to obtain the arteriovenous segmentation model; the retina images in the SUSTech-SYSU and ORIGA data sets are preprocessed, and a YOLO v3-based fundus optic disc detection and positioning model is constructed; based on the arteriovenous segmentation model and the optic disc detection and positioning model, the number of pixels where the perpendicular line of an arteriovenous vessel intersects the vessel is calculated to obtain the arteriovenous vessel diameters, and the arteriovenous diameter ratio is then computed from these diameters. The invention not only realizes accurate retinal arteriovenous vessel segmentation and optic disc detection and positioning, but also calculates the fundus retinal vessel diameters effectively, accurately and automatically.

Description

Fundus retina artery and vein vessel diameter calculation method and system
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a system for calculating the diameters of arterial and venous blood vessels of fundus retina.
Background
At present, fundus arteriovenous vessel diameter and diameter-ratio parameters are mostly obtained through the manual operation of an expert, or through semi-automatic software that extracts the arteriovenous vessels, after which vessel sections are selected manually and the diameters and diameter ratios are calculated. This process is inefficient and places high demands on the professional expertise of the operator. Automatic extraction of the fundus retinal arteriovenous vessels, automatic positioning of the diameter-measurement region, and automatic calculation of the arteriovenous diameters and diameter ratio are therefore very meaningful research topics.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a method and a system for calculating the arteriovenous vessel diameters of the fundus retina, addressing the current inefficiency in acquiring retinal arteriovenous diameter parameters.
The invention adopts the following technical scheme:
a calculation method for the artery and vein diameters of fundus retina comprises the following steps:
s1, preprocessing a color fundus retina image;
s2, constructing an iterative semi-supervised learning-based fundus retina arteriovenous semantic segmentation model, and performing arteriovenous segmentation on the image preprocessed in the step S1 to obtain an arteriovenous segmentation model;
s3, preprocessing the retina images in the SUSTech-SYSU data set and the ORIGA data set, and constructing a fundus optic disc detection and positioning model based on YOLO v3;
and S4, based on the arteriovenous segmentation model obtained in step S2 and the fundus optic disc detection and positioning model constructed in step S3, calculating the number of pixels where the perpendicular line of an arteriovenous vessel intersects the vessel, obtaining the arteriovenous vessel diameters, and then computing the arteriovenous diameter ratio from these diameters.
Specifically, the preprocessing in step S1 includes:
s101, converting a color fundus retina image into an HSV color space;
s102, adopting median filtering and contrast-limiting self-adaptive histogram equalization in an HSV color space to realize image denoising and contrast enhancement;
s103, filtering the result obtained in step S102 with a median filter whose size is designed to be 1/10 of the picture width, obtaining the retinal background;
s104, using an image superposition technology, marking the result obtained in the step S102 as src1, marking the result obtained in the step S103 as src2, and then performing image fusion to obtain a preprocessing result for eliminating the retina background.
Specifically, step S2 specifically includes:
s201, uniformly processing the different arteriovenous labels in the RITE, IOSTAR and AVRDB data sets from three different sources, with the unified arterial pixels red (255,0,0), venous pixels blue (0,0,255), crossing pixels green (0,255,0), and background pixels black (0,0,0);
s202, preprocessing the 170 fundus retina images with arteriovenous labels from step S201 using the preprocessing method of step S1, dividing them into a training set, a verification set and a test set at a ratio of 6:2:2, and randomly cropping, stretching and translating the 102 training images obtained by the division to augment the data 50-fold;
s203, training and testing a U-Net semantic segmentation model that integrates an Attention mechanism and a VGG structure on the training images processed in steps S201 and S202, obtaining the basic model, the VGG-Att-UNet model;
s204, based on the basic model, randomly selecting 1369 color fundus retina images from the SUSTech-SYSU and ORIGA public databases, and training and updating the basic model with the iterative semi-supervised learning framework, obtaining the semi-supervised arteriovenous segmentation model: the SVA-UNet model.
Further, in step S203, the VGG-Att-UNet model takes the Tversky loss as its objective function:

L(A, B) = 1 − |A ∩ B| / (|A ∩ B| + α|A − B| + β|B − A|)

where A and B denote the point sets contained in the two contour regions, β = 0.7 and α = 0.3. The loss is optimized with the AdaDelta adaptive learning-rate adjustment method, the Batch Size is set to 8, training is repeated for 50 rounds, cross-validation on the verification set is performed each round, and finally the model with the highest accuracy on the verification set is saved.
Specifically, step S204 specifically includes:
s2041, first iteration L_0: denote the basic model as M_0, then randomly sample N_0 = 685 images from the unlabeled data;
s2042, in iterations L_k, k = 1, 2, ..., N, sample N_k (k = 1, ..., N) images from the unlabeled data each time;
s2043, in each iteration, test the N_k unlabeled images with the weighted combination of the S_{k-1} sub-models trained in the previous iteration to obtain pseudo labels, and calculate the weight w_i of each sub-model;
s2044, repeat steps S2042 to S2043 until S_k decreases to 1, stop the iteration, and obtain the arteriovenous segmentation model.
Specifically, the step S3 specifically includes:
s301, based on the collected SUSTech-SYSU and ORIGA data sets, preprocessing the fundus retina images in them with the preprocessing method of step S1;
s302, labeling the optic disc on the fundus images in the ORIGA data set using LabelImg software to obtain the corresponding xml annotation files;
s303, randomly selecting 1369 fundus retina images from the preprocessed data and the data marked with the optic disc in the steps S301 and S302, and dividing the fundus retina images into a training set, a verification set and a test set according to a ratio of 6:2:2;
s304, training the YOLO v3 optic disc detection model on the training set, wherein the model divides the input image into S × S grid cells, each cell being responsible for predicting targets whose center falls inside it, thereby constructing and completing the YOLO v3-based fundus optic disc detection and positioning model.
Further, in step S304, each grid cell predicts the positions and confidences of B bounding boxes centered on it, where the position of one bounding box corresponds to four values (x, y, w, h): the coordinates of the box center and its width and height. The confidence measures how accurately the bounding box predicts the target position, and is calculated as:

Confidence = Pr(object) × IOU

where Pr(object) denotes the probability that the bounding box contains the target to be predicted: Pr(object) = 1 if the box contains a target, otherwise Pr(object) = 0; IOU is the intersection-over-union between the real target bounding box and the predicted bounding box. With C classes predicted in total, the YOLO v3 model outputs a tensor of size S × S × (5 × B + C).
Specifically, step S4 specifically includes:
s401, based on the arteriovenous segmentation and optic disc detection models, testing an input color fundus retina image to obtain the arteriovenous vessel segmentation result and the optic disc detection result;
s402, based on the optic disc detected in step S401, extracting the region of the arteriovenous result of step S401 within 2 to 3 optic-disc radii;
s403, starting from the optic-disc radius in the extraction result of step S402, drawing a black circle 0.15 radii wide at every 0.1-radius interval to cover the extracted vessels, obtaining a series of vessel segments;
s404, acquiring the contours of the connected vessel-segment regions obtained in step S403 with the findContours function in OpenCV;
s405, based on the contours of the connected vessel regions extracted in step S404, acquiring the barycentric coordinates of each contour with the moments function in OpenCV, and marking the barycenter of each vessel segment;
s406, based on the vessel barycentric coordinates extracted in step S405, letting (x_v1, y_v1) and (x_v2, y_v2) be the barycentric coordinates of two sampled segments of the same vessel;
s407, calculating the number of pixels where the perpendicular line intersects the vessel, obtaining an approximate value of the vessel diameter.
Further, in step S407, the ratio of arterial to venous diameters is calculated as follows:
AVR=CRAE/CRVE
wherein CRAE is the central retinal artery equivalent and CRVE is the central retinal vein equivalent.
The invention also provides a system for calculating the arteriovenous vessel diameters of the fundus retina, comprising:
the data preprocessing module is used for preprocessing the color fundus retina image;
the arteriovenous segmentation module, used for constructing the fundus retinal arteriovenous semantic segmentation model based on iterative semi-supervised learning, and performing arteriovenous segmentation on the image preprocessed by the data preprocessing module to obtain the arteriovenous segmentation model;
the optic disc detection and positioning module, used for preprocessing the retina images in the SUSTech-SYSU and ORIGA data sets and constructing the YOLO v3-based fundus optic disc detection and positioning model;
and the arteriovenous vessel diameter calculation module, used for calculating, based on the arteriovenous segmentation model obtained by the arteriovenous segmentation module and the fundus optic disc detection and positioning model constructed by the optic disc detection and positioning module, the number of pixels where the perpendicular line of an arteriovenous vessel intersects the vessel, obtaining the arteriovenous vessel diameters, and then computing the arteriovenous diameter ratio from these diameters.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to a calculation method of the artery and vein diameters of fundus retina, which is used for preprocessing color retina images and eliminating background differences of fundus retina images with different sources; then training and fusing a U-Net semantic segmentation model of an Attention mechanism and a VGG structure based on the acquired data set with the arteriovenous labels, and retraining the model by adopting a semi-supervised framework based on the data without the arteriovenous labels to obtain the arteriovenous segmentation model, wherein the arteriovenous semantic segmentation based on semi-supervised learning can effectively improve the model performance of basic semantic segmentation; based on the collected video disc detection data set, a YOLO v3 video disc detection model is constructed, and the video disc detection based on YOLO v3 has higher detection accuracy; and finally, obtaining an approximate calculation value of the diameter and the diameter ratio of the arteriovenous blood vessel through optic disk positioning, blood vessel sampling, blood vessel gravity center extraction and vertical line calculation, wherein the retina arteriovenous blood vessel segmentation model and the optic disk detection model are beneficial to the subsequent calculation of the diameter and the diameter ratio of the arteriovenous blood vessel, and can effectively avoid calculation errors caused by blood vessel framework extraction and edge detection based on sampling blood vessel gravity center extraction and vertical line calculation, thereby having more accurate calculation results.
Further, the preprocessing of step S1 aims to improve the quality of the fundus retina images and effectively eliminate the background differences between retina images in different databases. The small-size median filter effectively removes salt-and-pepper noise from the picture, contrast-limited adaptive histogram equalization effectively improves the picture contrast, and the large-size median filter effectively extracts the retinal background.
Furthermore, the iterative semi-supervised semantic segmentation model of step S2 aims to accurately segment the arteries and veins in fundus retina images, and is the basic step of arteriovenous vessel diameter calculation. The method takes the VGG-Att-UNet semantic segmentation model as its basis and adopts the iterative semi-supervised framework to improve the accuracy of the semantic segmentation model.
Furthermore, the retinal arteriovenous vessel segmentation model adopts the Tversky loss function as its objective. The objective function carries weights that balance false positives and false negatives during model training, thereby effectively addressing the class-imbalance problem in arteriovenous semantic segmentation.
Furthermore, the semi-supervision framework based on iteration is used for continuously sampling in each iteration, unlabeled retina pictures are input into the model, then a plurality of sub-models are trained by combining the labeled data, and the sub-models are weighted and integrated to generate pseudo labels of the next iteration. The iterative process enables the model to continuously learn information from the unlabeled dataset, greatly improves the use of fundus retina image data, and obviously improves the segmentation performance of the full-supervision semantic segmentation model and the generalization capability of the model.
Further, step S3 aims to accurately detect and position the optic disc in the fundus retina image, laying the foundation for further extracting the vessels around the optic disc for diameter calculation. This step adopts the YOLO v3 target detection method to detect and position the optic disc, with high detection efficiency and detection accuracy.
Furthermore, the YOLO target detection algorithm predicts the target position coordinates, the confidence coefficient and the category at the same time, so that the YOLO target detection algorithm has a very fast detection speed.
Further, the step S4 is a method for calculating the diameter of the blood vessel based on the perpendicular line of the blood vessel, which aims to accurately calculate the diameter and the diameter ratio of the arterial and venous blood vessel based on the retinal arterial and venous segmentation and the optic disc detection and positioning result. The vertical line of the blood vessel is calculated by positioning and sampling the barycentric coordinates of the small blood vessel, so that error superposition caused by the acquisition of the vertical line of the blood vessel through the detection of the outline of the blood vessel and the extraction of the blood vessel framework is avoided, and smaller error is caused in calculation.
Furthermore, the ratio of central artery equivalent to central vein equivalent is adopted to calculate the diameter ratio of retinal artery and vein, so that the correlation between the thick and thin blood vessels in the retinal blood vessels is integrated, and the calculation result is more accurate. In conclusion, the invention not only can realize accurate retinal artery and vein vessel segmentation and optic disc detection and positioning, but also can effectively, accurately and automatically realize calculation of the retinal vessel diameter of the fundus oculi.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a computational flow diagram of the present invention;
FIG. 2 is a view of a fundus oculi retinal arteriovenous vessel segmentation framework in accordance with the present invention;
FIG. 3 is a diagram of the VGG-Att-UNet network of the present invention;
FIG. 4 is a diagram of an iterative semi-supervised learning framework of the present invention;
FIG. 5 is a diagram of the YOLO v3 optic disc detection and positioning network of the present invention;
FIG. 6 is a flow chart of the calculation of the arterial and venous vessel diameters of the retina of the present invention;
fig. 7 shows test results of the method of the present invention, in which (a) is the arteriovenous vessel segmentation result, (b) is the optic disc detection result, (c) is a schematic view of the perpendicular line, and (d) is a partial enlargement with the perpendicular line drawn.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Various structural schematic diagrams according to the disclosed embodiments of the present invention are shown in the accompanying drawings. The figures are not drawn to scale, wherein certain details are exaggerated for clarity of presentation and may have been omitted. The shapes of the various regions, layers and their relative sizes, positional relationships shown in the drawings are merely exemplary, may in practice deviate due to manufacturing tolerances or technical limitations, and one skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions as actually required.
The invention provides a fundus retinal arteriovenous vessel diameter calculation method involving an image preprocessing method, a semi-supervised-learning-based arteriovenous vessel semantic segmentation method, a YOLO v3-based optic disc detection method, and a perpendicular-line-based vessel diameter calculation method. It not only realizes accurate and automatic calculation of the fundus retinal arteriovenous vessel diameters and diameter ratio, but also provides a preprocessing method that eliminates background differences between different retina images, a semi-supervised semantic segmentation method that improves model performance, and a YOLO v3-based optic disc detection method.
Referring to fig. 1, the method for calculating the artery and vein diameters of the fundus retina of the present invention comprises the following steps:
s1, preprocessing fundus retina images to eliminate background differences of images from different databases;
s101, first converting the color fundus retina image into the HSV color space according to formula (1):

V = Cmax
S = Δ / Cmax (S = 0 when Cmax = 0)
H = 60° × ((G′ − B′)/Δ mod 6), if Cmax = R′
H = 60° × ((B′ − R′)/Δ + 2), if Cmax = G′
H = 60° × ((R′ − G′)/Δ + 4), if Cmax = B′ (1)

wherein dividing the values of R, G, B by 255 yields R′, G′, B′, changing the gray-value range from [0,255] to [0,1]; Cmax = max(R′, G′, B′) and Cmin = min(R′, G′, B′) are the maximum and minimum of the three channels, and Δ = Cmax − Cmin is the difference between the maximum and the minimum.
S102, adopting median filtering and contrast-limiting self-adaptive histogram equalization in an HSV color space to realize image denoising and contrast enhancement;
s103, filtering the result obtained in the step S102 by using a median filter with the size of 1/10 of the picture width to obtain a retina background;
s104, using an image superposition technology, marking the result obtained in the step S102 as src1, marking the result obtained in the step S103 as src2, and performing image fusion by using a formula (2):
dst=α*src1+β*src2+γ (2)
Setting α = 5, β = −5 and γ = 128 yields the preprocessing result with the retinal background eliminated.
S2, constructing the fundus retinal arteriovenous semantic segmentation model based on iterative semi-supervised learning, as shown in FIG. 2;
s201, uniformly processing the arteriovenous labels in the RITE, IOSTAR and AVRDB data sets from three different sources, with arterial pixels red (255,0,0), venous pixels blue (0,0,255), crossing pixels green (0,255,0), and background pixels black (0,0,0);
s202, preprocessing the 170 fundus retina images with arteriovenous labels from step S201 using the preprocessing method of step S1, dividing them into a training set, a verification set and a test set at a ratio of 6:2:2, and randomly cropping, stretching and translating the 102 training images obtained by the division to augment the data 50-fold;
s203, training a U-Net semantic segmentation model integrated with an Attention mechanism and a VGG structure on the processed training set; this model is denoted the basic model, namely the VGG-Att-UNet model, as shown in FIG. 3.
The VGG-Att-UNet model adopts the Tversky loss function, which addresses the class-imbalance problem in arteriovenous semantic segmentation. The objective of this loss function is:

L(A, B) = 1 − |A ∩ B| / (|A ∩ B| + α|A − B| + β|B − A|)

where A and B represent the point sets contained in the two contour regions, i.e. the pixel sets of the predicted region and the actual region, with β = 0.7 and α = 0.3. The AdaDelta adaptive learning-rate adjustment method is selected to optimize the loss; the Batch Size is set to 8, training is repeated for 50 rounds (Epochs), cross-validation on the verification set is performed each round, and finally the model with the highest accuracy on the verification set is saved.
S204, based on the basic model, performing model training and updating with iterative semi-supervised learning on 1369 unlabeled fundus retina images randomly selected from the SUSTech-SYSU and ORIGA databases.
Referring to fig. 4, the iterative semi-supervised learning is implemented as follows:
s2041, first iteration L_0: denote the basic model as M_0, then randomly sample N_0 = 685 images from the unlabeled data;
s2042, in iterations L_k, k = 1, 2, ..., N, sample N_k (k = 1, ..., N) images from the unlabeled data each time.
S_k sub-models are trained in each iteration, with S_0 = 8 initially. The generated pseudo labels are randomly sampled S_k times together with the labeled data to train the S_k sub-models, where S_k decreases as k grows for k > 1.
s2043, in each iteration, test the N_k unlabeled images with the weighted combination of the S_{k-1} sub-models trained in the previous iteration, obtaining pseudo labels. The weight w_i of each sub-model is computed from its pixel-level segmentation probability map on the unlabeled data of the k-th iteration, where the superscript j runs over all R pixels of an image, normalized by the sum of the i-th sub-model's probabilities on the k-th unlabeled image. The greater the weight, the greater the consistency between the single sub-model's prediction and the combined prediction of all sub-models.
S2044, repeating steps S2042 to S2043 until S_k decreases to 1, stopping the iteration, and obtaining the arteriovenous segmentation model.
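The ensemble step S2043, weighting each sub-model by its consistency with the combined prediction and fusing the probability maps into a pseudo label, can be sketched as below. Since the exact expression for w_i is not reproduced in the text, the agreement-based weighting here is an assumed illustration of the stated principle, not the patent's formula.

```python
import numpy as np

def combine_submodels(prob_maps):
    """prob_maps: list of per-pixel probability maps (H, W) from the S_{k-1}
    sub-models on one unlabeled image. Returns (weights, pseudo_label).
    Assumed weighting: each sub-model's weight is its normalized agreement
    with the uniform-mean combined prediction (greater agreement, greater
    weight, matching the consistency principle of step S2043)."""
    stack = np.stack([p.astype(np.float64) for p in prob_maps])  # (S, H, W)
    mean = stack.mean(axis=0)                                    # combined map
    # agreement = 1 - mean absolute deviation from the combined map
    agree = np.array([1.0 - np.abs(p - mean).mean() for p in stack])
    w = agree / agree.sum()                                      # normalized w_i
    weighted = np.tensordot(w, stack, axes=1)                    # (H, W) fusion
    pseudo = (weighted >= 0.5).astype(np.uint8)                  # hard pseudo label
    return w, pseudo
```

The hard-thresholded `pseudo` map plays the role of the generated pseudo label that is mixed with the labeled data when training the next round of sub-models.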
S205, evaluating the model on the test set using pixel accuracy, mean intersection-over-union and frequency-weighted intersection-over-union.
S3, constructing the YOLO v3-based fundus optic disc detection and positioning model, as shown in FIG. 5;
s301, based on the collected SUSTech-SYSU and ORIGA data sets, preprocessing the retina images in them with the preprocessing method of step S1;
s302, labeling the optic disc on 650 fundus images in the ORIGA data set using LabelImg software to obtain the corresponding xml annotation files;
s303, dividing the 1369 fundus pictures in the database into a training set, a verification set and a test set at a ratio of 6:2:2;
s304, training a YOLO v3 optic disc detection model on the 821 training images, wherein YOLO v3 divides the input image into S×S grids, and each grid is responsible for predicting the targets whose centers fall within it. Each grid predicts the positions and confidences of B bounding boxes centered on it, wherein the position of a bounding box corresponds to four values (x, y, w, h), namely the coordinates of the bounding box center and the width and height of the bounding box; the confidence is used to measure the accuracy of the bounding box's prediction of the target position and is calculated as follows:
wherein Pr(object) represents the probability that the bounding box contains the target to be predicted, Pr(object) = 1 if the bounding box contains a target, otherwise Pr(object) = 0, and IOU is the intersection over union between the ground-truth target bounding box and the predicted bounding box; assuming C classes are predicted in total, the model outputs a tensor of size S×S×(5×B+C).
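The IOU term in the confidence can be computed directly from two (x, y, w, h) boxes; a minimal sketch under the center-coordinate convention stated above:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x, y, w, h),
    where (x, y) is the box centre as in the text above."""
    # convert centre/size form to corner coordinates
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give 1.0 and disjoint boxes give 0.0, matching the intended range of the confidence term.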
The model adjusts its parameters with the Adam optimizer, and the objective function of YOLO v3 is formula (9); the loss on the validation set is monitored during training, and the model with the minimum validation loss is saved for application.
wherein 1_i^obj indicates that grid i contains an object, and 1_ij^obj indicates that the j-th bounding box in grid i is responsible for predicting the target. The first two terms of the loss function predict the bounding box position coordinates, the third term predicts the confidence of bounding boxes containing a target, the fourth term predicts the confidence of bounding boxes not containing a target, and the last term predicts the target class.
S305, performing model evaluation on the 274 test images by using the mean intersection over union.
S4, calculating the diameters of the fundus arteriovenous blood vessels based on blood vessel perpendicular lines, as shown in FIG. 6.
S401, testing the input color fundus retina image based on the artery and vein segmentation model and the optic disc detection positioning model in the step S2 and the step S3, and obtaining an artery and vein blood vessel segmentation result and an optic disc detection result;
s402, extracting, based on the detected optic disc, the region of the arteriovenous result obtained in step S401 lying between 2 and 3 optic disc radii;
s403, drawing, starting from 2 optic disc radii, a black circle with a pixel width of 0.15 times the radius every 0.1 times the radius to cover the extracted blood vessels, obtaining a series of blood vessel segments;
s404, since blood vessel bifurcations and spurious pixels exist in the segmentation result and would introduce considerable interference into the diameter calculation, acquiring the contours of the blood vessel segment connected regions by using the findContours function in OpenCV, the contours being used to remove the interfering vessels with larger contour areas;
s405, acquiring, based on the blood vessel contours extracted in step S404, the centroid coordinates of each contour by using the moments function in OpenCV, and marking the centroid of each blood vessel segment;
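Step S403 can be sketched with plain NumPy in place of the cv2.circle drawing the text implies; the 0.1 r spacing and 0.15 r line width are the values stated in the text, everything else (names, the pure-NumPy annulus form) is an assumption:

```python
import numpy as np

def slice_vessels(vessel_mask, disc_cx, disc_cy, disc_r,
                  spacing=0.1, width=0.15):
    """Blank thin annuli around the optic disc centre to cut the extracted
    vessels into segments, as described in step S403. Pure-NumPy stand-in
    for drawing black circles with cv2.circle."""
    h, w = vessel_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - disc_cx, yy - disc_cy)         # distance to disc centre
    sliced = vessel_mask.copy()
    half = width * disc_r / 2.0                          # half the circle line width
    for m in np.arange(2.0, 3.0, spacing):               # from 2 r out toward 3 r
        sliced[np.abs(dist - m * disc_r) <= half] = 0    # draw one black circle
    return sliced
```

The vessel pixels left between the blanked circles form the segments whose contours are processed in the next steps.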
s406, based on the blood vessel centroid coordinates extracted in step S405, if (x_v1, y_v1), (x_v2, y_v2) are respectively the centroid coordinates of two blood vessel segments sampled from the same vessel, the equation of the straight line through the two centroids can be determined as y - y_v1 = k(x - x_v1),
wherein k = (y_v2 - y_v1)/(x_v2 - x_v1) is the slope of the straight line, so that the perpendicular through the midpoint of the two centroids is calculated as y - (y_v1 + y_v2)/2 = -(1/k)(x - (x_v1 + x_v2)/2);
s407, calculating the number of pixels where the perpendicular intersects the blood vessel, obtaining an approximate value of the blood vessel diameter.
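Steps S406 to S407 can be sketched as follows; the 0.5 px sampling step along the perpendicular is an assumption, since the patent only specifies counting the intersected pixels:

```python
import numpy as np

def vessel_diameter(mask, g1, g2):
    """Approximate diameter of one vessel segment: count the vessel pixels hit
    by the perpendicular erected at the midpoint of the centroid line g1-g2
    (steps S406-S407)."""
    (x1, y1), (x2, y2) = g1, g2
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # midpoint of the two centroids
    dx, dy = x2 - x1, y2 - y1
    n = np.hypot(dx, dy)
    px, py = -dy / n, dx / n                       # unit vector along the perpendicular
    h, w = mask.shape
    hits = set()
    for t in np.arange(-max(h, w), max(h, w), 0.5):
        x, y = int(round(mx + t * px)), int(round(my + t * py))
        if 0 <= x < w and 0 <= y < h and mask[y, x]:
            hits.add((x, y))                       # distinct vessel pixels on the line
    return len(hits)
```

On a mask containing a single short segment, the count of distinct hit pixels approximates the vessel width in pixels.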
The arteriovenous diameter ratio is calculated as the ratio of the central retinal artery equivalent (Central Retinal Artery Equivalent, abbreviated as CRAE) to the central retinal vein equivalent (Central Retinal Vein Equivalent, abbreviated as CRVE), calculated as follows:
wherein w_i, w_j are respectively the diameters of the wider and the narrower blood vessel, and the arteriovenous vessel diameter ratio is calculated as follows:
AVR=CRAE/CRVE (13)
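Since the patent's formula (12) did not survive extraction, the sketch below uses Knudtson's revised pairing formulas for CRAE and CRVE; the constants 0.88 and 0.95 are an assumption about the formula the patent intends, while formula (13) is reproduced as stated:

```python
import math

# Knudtson's revised pairing formulas for the retinal vessel equivalents.
# Assumption: the patent's own formula (12) was lost in extraction, so the
# constants 0.88 / 0.95 come from that published pairing scheme, in which the
# widest and narrowest vessels are paired repeatedly until one value remains.
def pair_artery(w_wide, w_narrow):
    return 0.88 * math.hypot(w_wide, w_narrow)    # one CRAE pairing step

def pair_vein(w_wide, w_narrow):
    return 0.95 * math.hypot(w_wide, w_narrow)    # one CRVE pairing step

def avr(crae, crve):
    """Formula (13): AVR = CRAE / CRVE."""
    return crae / crve
```

For example, pairing artery widths 3 and 4 px gives 0.88 × 5 = 4.4, and the final AVR is the single remaining CRAE divided by the single remaining CRVE.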
s408, verifying accuracy by calculating an error between the obtained result and a gold standard in the INSPIRE-AVR database.
In still another embodiment of the present invention, a fundus retinal arteriovenous vessel diameter calculation system is provided, which can be used to implement the above fundus retinal arteriovenous vessel diameter calculation method. Specifically, the system includes a data preprocessing module, an arteriovenous segmentation module, an optic disc detection and positioning module, and an arteriovenous vessel diameter calculation module.
The data preprocessing module is used for preprocessing the color fundus retina image;
the arteriovenous segmentation module is used for constructing a fundus retinal arteriovenous semantic segmentation model based on iterative semi-supervised learning, and carrying out arteriovenous segmentation on the image preprocessed by the data preprocessing module to obtain an arteriovenous segmentation model;
the optic disc detection and positioning module is used for preprocessing retina images in the SUSTech-SYSU data set and the ORIGA data set and constructing a fundus optic disc detection and positioning model based on YOLO v 3;
and the arteriovenous vessel diameter calculation module is used for calculating, based on the arteriovenous segmentation model obtained by the arteriovenous segmentation module and the fundus optic disc detection and positioning model constructed by the optic disc detection and positioning module, the number of pixels where the arteriovenous vessel perpendicular intersects the blood vessel to obtain the arteriovenous vessel diameter, the vessel diameter being evaluated by adopting the arteriovenous diameter ratio.
In yet another embodiment of the present invention, a terminal device is provided, the terminal device including a processor and a memory, the memory for storing a computer program, the computer program including program instructions, the processor for executing the program instructions stored in the computer storage medium. The processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; it is the computational and control core of the terminal, adapted to load and execute one or more instructions to implement a corresponding method flow or function. The processor of the embodiment of the present invention can be used for the operation of the fundus retinal arteriovenous vessel diameter calculation method, including:
preprocessing a color fundus retina image; constructing an iterative semi-supervised learning-based fundus retina arteriovenous semantic segmentation model, and performing arteriovenous segmentation on the preprocessed image to obtain an arteriovenous segmentation model; preprocessing retina images in a SUSTech-SYSU data set and an ORIGA data set, and constructing a fundus optic disc detection positioning model based on YOLO v 3; and calculating the number of pixels intersected between the vertical line of the artery and vein blood vessel and the blood vessel based on the artery and vein segmentation model and the fundus optic disk detection positioning model to obtain the artery and vein blood vessel diameter, and calculating the blood vessel diameter by adopting the artery and vein diameter ratio.
In a further embodiment of the present invention, the present invention also provides a storage medium, in particular, a computer readable storage medium (Memory), which is a Memory device in a terminal device, for storing programs and data. It will be appreciated that the computer readable storage medium herein may include both a built-in storage medium in the terminal device and an extended storage medium supported by the terminal device. The computer-readable storage medium provides a storage space storing an operating system of the terminal. Also stored in the memory space are one or more instructions, which may be one or more computer programs (including program code), adapted to be loaded and executed by the processor. The computer readable storage medium herein may be a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory.
One or more instructions stored in a computer-readable storage medium may be loaded and executed by a processor to implement the respective steps of the method for calculating an arteriovenous vessel diameter of a fundus retina in the above-described embodiments; one or more instructions in a computer-readable storage medium are loaded by a processor and perform the steps of:
preprocessing a color fundus retina image; constructing an iterative semi-supervised learning-based fundus retina arteriovenous semantic segmentation model, and performing arteriovenous segmentation on the preprocessed image to obtain an arteriovenous segmentation model; preprocessing retina images in a SUSTech-SYSU data set and an ORIGA data set, and constructing a fundus optic disc detection positioning model based on YOLO v 3; and calculating the number of pixels intersected between the vertical line of the artery and vein blood vessel and the blood vessel based on the artery and vein segmentation model and the fundus optic disk detection positioning model to obtain the artery and vein blood vessel diameter, and calculating the blood vessel diameter by adopting the artery and vein diameter ratio.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Taking the INSPIRE-AVR database as an example, the method was tested; the obtained arteriovenous vessel segmentation result, optic disc detection result and blood vessel perpendicular calculation result are shown in FIG. 7, and the calculated arteriovenous diameters are shown in Table 1.
TABLE 1 arteriovenous vessel diameter ratio calculated by the present method for INSPIRE-AVR dataset
The error between the results of the present invention and the gold standard was calculated; the mean and standard deviation of the error are shown in Table 2, and the results of the present invention show only a small error relative to the gold standard.
TABLE 2 average error Table between calculated results and standard values of the invention
In summary, in the method for calculating the fundus retinal arteriovenous vessel diameters according to the present invention, the test in this embodiment shows only a small average error between the obtained arteriovenous diameter ratio and the two published gold standards, indicating that the calculation method has high accuracy; the small standard deviation of the error indicates that the calculation method is also stable.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above is only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited by this, and any modification made on the basis of the technical scheme according to the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (9)

1. A fundus retinal arteriovenous vessel diameter calculation method, characterized by comprising the following steps:
s1, preprocessing a color fundus retina image;
s2, constructing an iterative semi-supervised learning-based fundus retina arteriovenous semantic segmentation model, and performing arteriovenous segmentation on the image preprocessed in the step S1 to obtain an arteriovenous segmentation model;
s3, preprocessing retina images in the SUSTech-SYSU data set and the ORIGA data set, and constructing a fundus optic disc detection positioning model based on YOLO v 3;
s4, calculating the number of pixels intersected between the perpendicular line of the artery and vein and the blood vessel based on the artery and vein segmentation model obtained in the step S2 and the fundus optic disc detection and positioning model constructed in the step S3 to obtain the artery and vein blood vessel diameter, and calculating the blood vessel diameter by adopting the artery and vein diameter ratio;
the step S4 specifically includes:
s401, based on an arteriovenous segmentation and optic disc detection model, testing an input color fundus retina image to obtain an arteriovenous blood vessel segmentation result and an optic disc detection result;
s402, extracting, based on the optic disc detected in the step S401, the region of the arteriovenous result obtained in the step S401 lying between 2 and 3 optic disc radii;
s403, drawing, starting from the optic disc radius of the extraction result of the step S402, a black circle with a pixel width of 0.15 times the radius every 0.1 times the radius, and covering the extracted blood vessels to obtain a series of blood vessel segments;
s404, acquiring the contours of the blood vessel segment connected regions obtained in the step S403 by utilizing the findContours function in OpenCV;
s405, acquiring, based on the contours of the blood vessel connected regions extracted in the step S404, the centroid coordinates of each contour by using the moments function in OpenCV, and marking the centroid of each blood vessel segment;
s406, setting, based on the blood vessel centroid coordinates extracted in the step S405, (x_v1, y_v1) and (x_v2, y_v2) as the centroid coordinates of two blood vessel segments sampled from the same blood vessel respectively;
s407, calculating the number of pixels intersected between the perpendicular and the blood vessel, and obtaining an approximate calculation value of the blood vessel diameter.
2. The method according to claim 1, wherein the step S1 of preprocessing is specifically:
s101, converting a color fundus retina image into an HSV color space;
s102, adopting median filtering and contrast-limiting self-adaptive histogram equalization in an HSV color space to realize image denoising and contrast enhancement;
s103, designing the size of the median filter to be 1/10 of the picture width, and filtering the result obtained in the step S102 with the filter to obtain the retina background;
s104, using an image superposition technology, marking the result obtained in the step S102 as src1, marking the result obtained in the step S103 as src2, and then performing image fusion to obtain a preprocessing result for eliminating the retina background.
3. The method according to claim 1, wherein step S2 is specifically:
s201, uniformly processing the different arteriovenous label annotations in the three data sets of different sources, RITE, IOSTAR and AVRDB, wherein the unified artery pixels are red (255,0,0), the vein pixels are blue (0,0,255), the crossing pixels are green (0,255,0), and the background pixels are black (0,0,0);
s202, preprocessing 170 fundus retina images with arteriovenous labels in the step S201 by utilizing the preprocessing method of the step S1, dividing a training set, a verification set and a test set according to the ratio of 6:2:2, and randomly cutting, stretching and translating 102 training set images obtained by dividing to perform data amplification by 50 times;
s203, training and testing a U-Net semantic segmentation model integrated with an Attention mechanism and a VGG structure based on the training set images processed in the steps S201 and S202 to obtain a basic model-VGG-Att-UNet model;
s204, based on the basic model, randomly selecting 1369 color fundus retina images from the SUSTech-SYSU and ORIGA public databases, and training and updating the basic model by adopting an iterative semi-supervised learning framework to obtain an arteriovenous segmentation model based on semi-supervised learning: the SVA-UNet model.
4. A method according to claim 3, characterized in that in step S203, the objective function of the loss function of the VGG-Att-UNet model is:
the method comprises the steps of A and B representing point sets contained in two contour areas, beta=0.7, alpha=0.3, optimizing a cross entropy loss function by adopting an AdaDelta self-adaptive learning rate adjustment optimization method, setting Batch Size to 8, repeating training for 50 rounds, performing model verification on a verification set by adopting cross verification in each round, and finally selecting a model with highest precision on the verification set for storage.
5. The method according to claim 1, wherein step S204 is specifically:
s2041, first iteration L_0: the basic model is denoted M_0, and N_0 = 685 images are randomly sampled from the unlabeled data;
s2042, in the iterations L_k, k = 1, 2, ..., N, sampling N_k images from the unlabeled data each time;
s2043, in each iteration, weighting and combining the S_{k-1} sub-models trained in the previous iteration to test the N_k unlabeled images to obtain pseudo labels, and calculating the weight w_i of each sub-model;
S2044, repeating the iterative steps S2042 to S2043 until S_k decreases to 1, stopping the iteration, and obtaining the arteriovenous segmentation model.
6. The method according to claim 1, wherein step S3 is specifically:
s301, preprocessing fundus retina images in the SUSTech-SYSU and ORIGA data sets by using the preprocessing method of the step S1 based on the collected SUSTech-SYSU and ORIGA data sets;
s302, performing optic disc annotation on the fundus images in the ORIGA data set by using the LabelImg software to obtain corresponding xml annotation files;
s303, randomly selecting 1369 fundus retina images from the preprocessed data and the data marked with the optic disc in the steps S301 and S302, and dividing the fundus retina images into a training set, a verification set and a test set according to a ratio of 6:2:2;
s304, training the YOLO v3 optic disc detection model on the training set, wherein the YOLO v3 optic disc detection model divides the input image into S×S grids, each grid being responsible for predicting the targets whose centers fall within it, thereby constructing and completing the fundus optic disc detection and positioning model based on YOLO v3.
7. The method according to claim 6, wherein in step S304, each grid predicts the positions and confidences of B bounding boxes centered on it, the position of one bounding box corresponding to four values (x, y, w, h), respectively the coordinates of the bounding box center and the width and height of the bounding box; the confidence is used for measuring the accuracy of the bounding box's prediction of the target position, and is calculated as follows:
wherein Pr(object) represents the probability that the bounding box contains the target to be predicted, Pr(object) = 1 if the bounding box contains a target, otherwise Pr(object) = 0, and IOU is the intersection over union between the ground-truth target bounding box and the predicted bounding box; the C classes are jointly predicted, yielding a tensor output of the YOLO v3 model of size S×S×(5×B+C).
8. The method according to claim 1, wherein in step S407, the ratio of arterial to venous diameters is calculated as follows:
AVR=CRAE/CRVE
wherein CRAE is the central retinal artery equivalent and CRVE is the central retinal vein equivalent.
9. A fundus retinal artery and vein vessel diameter calculation system, comprising:
the data preprocessing module is used for preprocessing the color fundus retina image;
the arteriovenous segmentation module is used for constructing a fundus retinal arteriovenous semantic segmentation model based on iterative semi-supervised learning, and carrying out arteriovenous segmentation on the image preprocessed by the data preprocessing module to obtain an arteriovenous segmentation model;
the optic disc detection and positioning module is used for preprocessing retina images in the SUSTech-SYSU data set and the ORIGA data set and constructing a fundus optic disc detection and positioning model based on YOLO v 3;
an arteriovenous vessel diameter calculation module for calculating, based on the arteriovenous segmentation model obtained by the arteriovenous segmentation module and the fundus optic disc detection and positioning model constructed by the optic disc detection and positioning module, the number of pixels where the arteriovenous vessel perpendicular intersects the blood vessel to obtain the arteriovenous vessel diameter, the vessel diameter being calculated by adopting the arteriovenous diameter ratio,
wherein the arteriovenous vessel diameter calculation module calculates the vessel diameter by:
s401, based on an arteriovenous segmentation and optic disc detection model, testing an input color fundus retina image to obtain an arteriovenous blood vessel segmentation result and an optic disc detection result;
s402, extracting, based on the optic disc detected in the step S401, the region of the arteriovenous result obtained in the step S401 lying between 2 and 3 optic disc radii;
s403, drawing, starting from the optic disc radius of the extraction result of the step S402, a black circle with a pixel width of 0.15 times the radius every 0.1 times the radius, and covering the extracted blood vessels to obtain a series of blood vessel segments;
s404, acquiring the contours of the blood vessel segment connected regions obtained in the step S403 by utilizing the findContours function in OpenCV;
s405, acquiring, based on the contours of the blood vessel connected regions extracted in the step S404, the centroid coordinates of each contour by using the moments function in OpenCV, and marking the centroid of each blood vessel segment;
s406, setting, based on the blood vessel centroid coordinates extracted in the step S405, (x_v1, y_v1) and (x_v2, y_v2) as the centroid coordinates of two blood vessel segments sampled from the same blood vessel respectively;
s407, calculating the number of pixels intersected between the perpendicular and the blood vessel, and obtaining an approximate calculation value of the blood vessel diameter.
CN202110536793.9A 2021-05-17 2021-05-17 Fundus retina artery and vein vessel diameter calculation method and system Active CN113269737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110536793.9A CN113269737B (en) 2021-05-17 2021-05-17 Fundus retina artery and vein vessel diameter calculation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110536793.9A CN113269737B (en) 2021-05-17 2021-05-17 Fundus retina artery and vein vessel diameter calculation method and system

Publications (2)

Publication Number Publication Date
CN113269737A CN113269737A (en) 2021-08-17
CN113269737B true CN113269737B (en) 2024-03-19

Family

ID=77231335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110536793.9A Active CN113269737B (en) 2021-05-17 2021-05-17 Fundus retina artery and vein vessel diameter calculation method and system

Country Status (1)

Country Link
CN (1) CN113269737B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792740B (en) * 2021-09-16 2023-12-26 平安创科科技(北京)有限公司 Artery and vein segmentation method, system, equipment and medium for fundus color illumination
WO2023240319A1 (en) * 2022-06-16 2023-12-21 Eyetelligence Limited Fundus image analysis system
CN115511883B (en) * 2022-11-10 2023-04-18 北京鹰瞳科技发展股份有限公司 Method, apparatus and storage medium for determining curvature of retinal fundus blood vessel

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103006175A (en) * 2012-11-14 2013-04-03 天津工业大学 Method for positioning optic disk for eye fundus image on basis of PC (Phase Congruency)
EP3247257A1 (en) * 2015-01-19 2017-11-29 Statumanu ICP ApS Method and apparatus for non-invasive assessment of intracranial pressure
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN109685770A (en) * 2018-12-05 2019-04-26 合肥奥比斯科技有限公司 Retinal vessel curvature determines method
CN111000563A (en) * 2019-11-22 2020-04-14 北京理工大学 Automatic measuring method and device for retinal artery and vein diameter ratio
CN111222361A (en) * 2018-11-23 2020-06-02 福州依影健康科技有限公司 Method and system for analyzing hypertension retina vascular change characteristic data
CN111242933A (en) * 2020-01-15 2020-06-05 华南理工大学 Retina image artery and vein classification device, equipment and storage medium
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN111681242A (en) * 2020-08-14 2020-09-18 北京至真互联网技术有限公司 Retinal vessel arteriovenous distinguishing method, device and equipment
CN112075922A (en) * 2020-10-14 2020-12-15 中国人民解放军空军军医大学 Method for measuring fundus image indexes of type 2 diabetes mellitus and analyzing correlation between fundus image indexes and diabetic nephropathy
CN112716446A (en) * 2020-12-28 2021-04-30 深圳硅基智能科技有限公司 Method and system for measuring pathological change characteristics of hypertensive retinopathy

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180146370A1 (en) * 2016-11-22 2018-05-24 Ashok Krishnaswamy Method and apparatus for secured authentication using voice biometrics and watermarking
US20190014982A1 (en) * 2017-07-12 2019-01-17 iHealthScreen Inc. Automated blood vessel feature detection and quantification for retinal image grading and disease screening
US11151718B2 (en) * 2019-10-30 2021-10-19 Nikon Corporation Image processing method, image processing device, and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Generation of new blood vessels in the human retina with L-system fractal construction; Guedri Hichem; 2012 6th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT); entire document *
Automatic classification method of arteriovenous vessels and measurement of their diameters; Xue Lanyan; Cao Xinrong; Lin Jiawen; Zheng Shaohua; Yu Lun; Chinese Journal of Scientific Instrument (09); entire document *
Research status of retinal microvascular diameter measurement and its relationship with diabetes; Guo Qianru; Xu Yingjun; Chinese Journal of Coal Industry Medicine (12); entire document *
Semantic-fusion method for artery-vein classification in fundus images; Gao Yingqi; Guo Song; Li Ning; Wang Kai; Kang Hong; Li Tao; Journal of Image and Graphics (10); entire document *

Also Published As

Publication number Publication date
CN113269737A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN113269737B (en) Fundus retina artery and vein vessel diameter calculation method and system
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
US11487995B2 (en) Method and apparatus for determining image quality
KR101640998B1 (en) Image processing apparatus and image processing method
CN109191476A (en) The automatic segmentation of Biomedical Image based on U-net network structure
CN107657612A (en) Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN109584209B (en) Vascular wall plaque recognition apparatus, system, method, and storage medium
CN111931811B (en) Calculation method based on super-pixel image similarity
CN110059586B (en) Iris positioning and segmenting system based on cavity residual error attention structure
Liu et al. A framework of wound segmentation based on deep convolutional networks
CN109815919A (en) A kind of people counting method, network, system and electronic equipment
CN108961675A (en) Fall detection method based on convolutional neural networks
CN109816666B (en) Symmetrical full convolution neural network model construction method, fundus image blood vessel segmentation device, computer equipment and storage medium
CN112614133B (en) Three-dimensional pulmonary nodule detection model training method and device without anchor point frame
CN112819821B (en) Cell nucleus image detection method
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN113706579A (en) Prawn multi-target tracking system and method based on industrial culture
CN110472673B (en) Parameter adjustment method, fundus image processing device, fundus image processing medium and fundus image processing apparatus
CN113469963B (en) Pulmonary artery image segmentation method and device
CN113221731B (en) Multi-scale remote sensing image target detection method and system
CN114511898A (en) Pain recognition method and device, storage medium and electronic equipment
CN112001921B (en) New coronary pneumonia CT image focus segmentation image processing method based on focus weighting loss function
CN116468702A (en) Chloasma assessment method, device, electronic equipment and computer readable storage medium
CN111127400A (en) Method and device for detecting breast lesions
CN110287991A (en) Plant crude drug authenticity verification method, apparatus, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Guo Peihong

Inventor after: Zhang Dalei

Inventor after: Zu Jian

Inventor after: Hu Na

Inventor before: Zu Jian

Inventor before: Guo Peihong

Inventor before: Hu Na

Inventor before: Zhang Dalei

TA01 Transfer of patent application right

Effective date of registration: 20210924

Address after: Room 21, floor 4, building 2, yard a 2, North West Third Ring Road, Haidian District, Beijing 100048

Applicant after: Beijing Yingtong Technology Development Co.,Ltd.

Address before: 710049 No. 28 West Xianning Road, Shaanxi, Xi'an

Applicant before: XI'AN JIAOTONG University

GR01 Patent grant