CN113269737A - Method and system for calculating diameter of artery and vein of retina - Google Patents


Info

Publication number
CN113269737A
CN113269737A (application CN202110536793.9A; granted as CN113269737B)
Authority
CN
China
Prior art keywords
arteriovenous
model
fundus
retina
optic disc
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110536793.9A
Other languages
Chinese (zh)
Other versions
CN113269737B (en)
Inventor
祖建 (Zu Jian)
郭佩宏 (Guo Peihong)
胡娜 (Hu Na)
张大磊 (Zhang Dalei)
Current Assignee
Beijing Airdoc Technology Co Ltd
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN202110536793.9A
Publication of CN113269737A
Application granted
Publication of CN113269737B
Active legal status
Anticipated expiration legal status

Classifications

    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/11: Segmentation; region-based segmentation
    • G06T 7/12: Segmentation; edge-based segmentation
    • G06T 7/13: Edge detection
    • G06V 10/267: Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T 2207/10024: Image acquisition modality: color image
    • G06T 2207/20032: Median filtering
    • G06T 2207/20081: Training; learning
    • G06T 2207/30041: Subject of image: eye; retina; ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a method and a system for calculating the diameters of fundus retinal arteries and veins. A color fundus retinal image is preprocessed; a fundus retinal arteriovenous semantic segmentation model is constructed based on iterative semi-supervised learning and used to segment the arteries and veins in the preprocessed image, yielding an arteriovenous segmentation model. The retinal images in the SUSTech-SYSU and ORIGA data sets are preprocessed, and a fundus optic disc detection and positioning model is constructed based on YOLO v3. Using the arteriovenous segmentation model and the optic disc detection and positioning model, the number of pixels where a line perpendicular to each artery or vein intersects the vessel is counted to obtain the arteriovenous vessel diameters, and the arteriovenous diameter ratio is then computed. The invention achieves accurate retinal artery and vein segmentation and optic disc detection and positioning, and calculates fundus retinal vessel diameters effectively, accurately, and automatically.

Description

Method and system for calculating diameter of artery and vein of retina
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a system for calculating the diameter of an artery and vein of a retina.
Background
At present, the diameter and diameter-ratio parameters of fundus arteriovenous vessels are obtained by experts either manually or with semi-automatic software: vessel segments are selected by hand and the diameter and diameter ratio are then computed. This process is inefficient and places high demands on the operator's expertise, so automatic extraction of fundus retinal arteries and veins, automatic localization of the diameter-measurement region, and automatic calculation of the arteriovenous diameters and diameter ratio are very meaningful research directions.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method and a system for calculating the diameters of fundus retinal arteriovenous vessels, addressing the current inefficiency of acquiring retinal arteriovenous diameter parameters.
The invention adopts the following technical scheme:
A method for calculating the diameters of fundus retinal arteries and veins comprises the following steps:
S1, preprocessing a color fundus retinal image;
S2, constructing a fundus retinal arteriovenous semantic segmentation model based on iterative semi-supervised learning and performing arteriovenous segmentation on the image preprocessed in step S1 to obtain an arteriovenous segmentation model;
S3, preprocessing the retinal images in the SUSTech-SYSU and ORIGA data sets and constructing a fundus optic disc detection and positioning model based on YOLO v3;
S4, based on the arteriovenous segmentation model obtained in step S2 and the fundus optic disc detection and positioning model constructed in step S3, counting the pixels where a line perpendicular to each artery or vein intersects the vessel to obtain the arteriovenous vessel diameters, and computing the arteriovenous diameter ratio.
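The four steps above form a linear pipeline. A minimal sketch of that data flow, with each stage injected as a callable (all names here are hypothetical, introduced only to illustrate how the stages described in the text hand results to one another):

```python
def vessel_diameter_pipeline(image, preprocess, segment_av, detect_disc, measure):
    """Orchestration of steps S1-S4: preprocess the image, segment
    arteries/veins, locate the optic disc, then measure diameters."""
    pre = preprocess(image)          # S1: background-removed image
    av_mask = segment_av(pre)        # S2: arteriovenous segmentation
    disc = detect_disc(pre)          # S3: optic disc location
    return measure(av_mask, disc)    # S4: diameters and diameter ratio
```

With the stages stubbed out, the sketch just threads intermediate results; the real models of steps S2 and S3 would plug into `segment_av` and `detect_disc`.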
Specifically, step S1 comprises:
S101, converting the color fundus retinal image to the HSV color space;
S102, denoising the image and enhancing its contrast in the HSV color space with median filtering and contrast-limited adaptive histogram equalization;
S103, filtering the result of step S102 with a median filter whose kernel size is designed as 1/10 of the image width to extract the retinal background;
S104, denoting the result of step S102 as src1 and the result of step S103 as src2, and fusing the two images by weighted superposition to obtain a preprocessed image with the retinal background removed.
Specifically, step S2 comprises:
S201, unifying the arteriovenous labels of the three different-source data sets RITE, IOSTAR, and AVRDB: artery pixels are set to red (255, 0, 0), vein pixels to blue (0, 0, 255), crossing pixels to green (0, 255, 0), and background pixels to black (0, 0, 0);
S202, preprocessing the 170 fundus retinal images with arteriovenous labels from step S201 using the method of step S1, splitting them into training, validation, and test sets at a 6:2:2 ratio, and augmenting the 102 training images 50-fold by random cropping, stretching, and translation;
S203, training and testing a U-Net semantic segmentation model that integrates an attention mechanism and a VGG backbone on the training set processed in steps S201 and S202 to obtain the basic model, the VGG-Att-UNet model;
S204, starting from the basic model, randomly selecting 1369 color fundus retinal images from the public SUSTech-SYSU and ORIGA databases and training and updating the basic model within an iterative semi-supervised learning framework to obtain a semi-supervised arteriovenous segmentation model, the SVA-UNet model.
Further, in step S203, the objective function of the VGG-Att-UNet model is the Tversky loss:
T(A, B) = |A ∩ B| / (|A ∩ B| + α·|A \ B| + β·|B \ A|),  Loss = 1 − T(A, B)
where A and B represent the point sets contained in the two contour regions, β = 0.7 and α = 0.3. The AdaDelta adaptive learning-rate optimization method is selected to optimize the loss, the batch size is set to 8, training is repeated for 50 rounds with validation on the validation set each round, and the model with the highest validation accuracy is finally saved.
Specifically, step S204 comprises:
S2041, first iteration L_0: denote the basic model M_0, then randomly sample N_0 = 685 images from the unlabeled data;
S2042, in iteration L_k (k = 1, …), sample N_k images from the unlabeled data;
S2043, in each iteration, combine the S_{k−1} sub-models trained in the previous iteration by their weights to test the N_k unlabeled images and obtain pseudo-labels, and compute the weight w_i of each sub-model;
S2044, repeat steps S2042-S2043 until S_k decreases to 1, then stop iterating; the result is the arteriovenous segmentation model.
Specifically, step S3 comprises:
S301, preprocessing the fundus retinal images in the collected SUSTech-SYSU and ORIGA data sets with the method of step S1;
S302, annotating the optic disc in the ORIGA fundus images with LabelImg to obtain the corresponding xml annotation files;
S303, randomly selecting 1369 fundus retinal images from the preprocessed and disc-annotated data of steps S301 and S302, and splitting them into training, validation, and test sets at a 6:2:2 ratio;
S304, training a YOLO v3 optic disc detection model on the training set: the model divides the input image into an S × S grid, each cell being responsible for predicting targets whose centers fall inside it, which completes the YOLO v3-based fundus optic disc detection and positioning model.
Further, in step S304, each grid cell predicts the positions and confidences of B bounding boxes centered in it; each box position corresponds to four values (x, y, w, h): the coordinates of the box center and the box width and height. The confidence measures how accurately the box predicts the target position and is computed as:
Confidence = Pr(Object) × IOU
where Pr(Object) is the probability that the bounding box contains a target to be predicted: Pr(Object) = 1 if the box contains a target and 0 otherwise; IOU is the intersection over union, i.e. the ratio of the intersection to the union of the ground-truth box and the predicted box. With C classes predicted in total, the YOLO v3 model outputs a tensor of size S × S × (5 × B + C).
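As a concrete illustration of the IOU term (not code from the patent), a minimal computation for two axis-aligned (x, y, w, h) boxes given by their centers, as in the text:

```python
def iou(box_a, box_b):
    """Intersection over union of two (cx, cy, w, h) boxes."""
    # Convert center/size form to corner coordinates.
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    # Overlap extents are clamped at zero when the boxes do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give an IOU of 1, disjoint boxes give 0, and partial overlap falls in between, which is exactly the quantity the confidence score scales by.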
Specifically, step S4 comprises:
S401, testing an input color fundus retinal image with the arteriovenous segmentation and optic disc detection models to obtain the arteriovenous segmentation result and the optic disc detection result;
S402, extracting, from the arteriovenous result of step S401, the region between 2 and 3 optic disc radii around the disc detected in step S401;
S403, starting at 2 optic disc radii, drawing black circles with a stroke width of 0.15 radii at intervals of 0.1 radii over the extraction result of step S402, masking the extracted vessels into a series of short vessel sections;
S404, obtaining the contours of the connected vessel-section regions from step S403 with the findContours function in OpenCV;
S405, from the contours of the vessel regions extracted in step S404, obtaining the barycentric coordinates of each contour with the moments function in OpenCV and marking the barycenter of each vessel section;
S406, from the barycentric coordinates extracted in step S405, letting (x_v1, y_v1) and (x_v2, y_v2) be the barycenters of two vessel sections sampled from the same vessel;
S407, counting the pixels where the perpendicular line intersects the vessel to obtain an approximate value of the vessel diameter.
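Steps S404-S407 rely on OpenCV's findContours and moments; the same barycenter-and-perpendicular idea can be sketched with NumPy alone (helper names are hypothetical, and the sampling step along the perpendicular is an illustrative choice):

```python
import numpy as np

def centroid(mask):
    """Barycenter of a binary vessel section, returned as (x, y)."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def perpendicular_diameter(mask, c1, c2, step=0.25, max_r=50):
    """Count vessel pixels along the line through c1 perpendicular to the
    vessel direction c1 -> c2 (an approximation of step S407). Assumes
    c1 and c2 are distinct barycenters of sections of the same vessel."""
    d = np.array([c2[0] - c1[0], c2[1] - c1[1]], dtype=float)
    d /= np.linalg.norm(d)
    n = np.array([-d[1], d[0]])          # unit normal to the vessel axis
    hits = set()
    for t in np.arange(-max_r, max_r, step):
        x = int(round(c1[0] + t * n[0]))
        y = int(round(c1[1] + t * n[1]))
        if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
            hits.add((x, y))             # dedupe: count each pixel once
    return len(hits)
```

Counting distinct pixels hit by the perpendicular gives the approximate diameter in pixels, which is the quantity the patent feeds into the diameter-ratio calculation.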
Further, in step S407, the arteriovenous diameter ratio is calculated as:
AVR = CRAE / CRVE
where CRAE is the central retinal artery equivalent and CRVE is the central retinal vein equivalent.
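The patent does not spell out how CRAE and CRVE are formed from the individual vessel widths; a common choice in the literature is the revised Knudtson pairing formulas (largest width paired with smallest, iterated), which are assumed here purely for illustration:

```python
import math

def knudtson_equivalent(widths, is_artery):
    """Combine measured vessel widths into CRAE/CRVE using the revised
    Knudtson formulas (an assumption; the patent does not specify this).
    Arteries use coefficient 0.88, veins 0.95."""
    k = 0.88 if is_artery else 0.95
    ws = sorted(widths, reverse=True)
    while len(ws) > 1:
        paired = []
        while len(ws) > 1:
            # Pair the widest remaining vessel with the narrowest.
            paired.append(k * math.hypot(ws.pop(0), ws.pop(-1)))
        if ws:                 # odd count: carry the middle width over
            paired.append(ws.pop())
        ws = sorted(paired, reverse=True)
    return ws[0]

def avr(artery_widths, vein_widths):
    """Arteriovenous ratio AVR = CRAE / CRVE."""
    return knudtson_equivalent(artery_widths, True) / knudtson_equivalent(vein_widths, False)
```

With equal artery and vein widths the ratio reduces to (0.88/0.95)^2, a quick sanity check on the pairing logic.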
Another technical solution of the present invention is a system for calculating the diameters of fundus retinal arteriovenous vessels, comprising:
a data preprocessing module for preprocessing the color fundus retinal image;
an arteriovenous segmentation module for constructing a fundus retinal arteriovenous semantic segmentation model based on iterative semi-supervised learning and performing arteriovenous segmentation on the image preprocessed by the data preprocessing module to obtain an arteriovenous segmentation model;
an optic disc detection and positioning module for preprocessing the retinal images in the SUSTech-SYSU and ORIGA data sets and constructing a YOLO v3-based fundus optic disc detection and positioning model; and
an arteriovenous diameter calculation module for counting, based on the arteriovenous segmentation model and the optic disc detection and positioning model, the pixels where the perpendicular line intersects each artery or vein to obtain the arteriovenous vessel diameters, and computing the arteriovenous diameter ratio.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to a method for calculating the diameter of artery and vein of fundus retina, which preprocesses a color retina image and can eliminate the background difference of fundus retina images from different sources; then training a U-Net semantic segmentation model fusing an Attention mechanism and a VGG structure based on the collected data set with the arteriovenous labels, and retraining the model by adopting a semi-supervised frame based on data without arteriovenous labels to obtain the arteriovenous segmentation model of the invention; a YOLO v3 optic disc detection model is constructed based on the collected optic disc detection data set, and the optic disc detection based on YOLO v3 has higher detection accuracy; and finally, obtaining an approximate calculation value of the diameter and the diameter ratio of the arteriovenous blood vessel through optic disc positioning, blood vessel sampling, blood vessel gravity center extraction and perpendicular line calculation, wherein the retinal arteriovenous blood vessel segmentation model and the optic disc detection model are favorable for calculating the diameter and the diameter ratio of the subsequent arteriovenous blood vessel, and the calculation errors caused by blood vessel skeleton extraction and edge detection can be effectively avoided based on the blood vessel gravity center extraction and the perpendicular line calculation of the sampling, so that a more accurate calculation result is obtained.
Further, the preprocessing method of step S1 improves the quality of fundus retinal images and effectively removes background differences between retinal images in different databases. The small median filter removes salt-and-pepper noise, contrast-limited adaptive histogram equalization effectively improves contrast, and the large median filter effectively extracts the retinal background.
Further, the iterative semi-supervised semantic segmentation model of step S2 accurately segments the arteries and veins in the fundus retinal image, the basic step of arteriovenous diameter calculation. The VGG-Att-UNet semantic segmentation model is taken as the basis, and the iterative semi-supervised framework improves its accuracy.
Furthermore, the retinal arteriovenous segmentation model adopts the Tversky loss as its objective, which effectively addresses class imbalance in arteriovenous semantic segmentation: the objective's weights balance false positives against false negatives during training.
Furthermore, the iterative semi-supervised framework samples unlabeled retinal images in every iteration, trains several sub-models together with the labeled data, and generates the pseudo-labels for the next iteration by weighted integration of the sub-models. This iterative process lets the model keep learning from the unlabeled data, greatly increases the use of fundus retinal image data, and clearly improves both the segmentation performance of the fully supervised model and its generalization ability.
Further, step S3 accurately detects and locates the optic disc in the fundus retinal image, laying the foundation for extracting the vessels around the disc for diameter calculation. The YOLO v3 target detection method is adopted here for its high detection efficiency and accuracy.
Furthermore, the YOLO detector predicts target position coordinates, confidence, and class simultaneously, which gives it a high detection speed.
Further, step S4 is a vessel diameter calculation method based on lines perpendicular to the vessels, built on the results of retinal arteriovenous segmentation and optic disc detection and positioning. Computing the perpendicular from the barycentric coordinates of sampled small vessel sections avoids the accumulated error of deriving perpendiculars from vessel contour detection and skeleton extraction, so the calculation error is smaller.
Furthermore, the diameter ratio is computed as the ratio of the central artery equivalent to the central vein equivalent, which integrates the correlation between thick and thin retinal vessels and makes the result more accurate. In conclusion, the invention achieves accurate retinal artery and vein segmentation and optic disc detection and positioning, and calculates fundus retinal vessel diameters effectively, accurately, and automatically.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flow chart of the calculation of the present invention;
FIG. 2 is a frame diagram of the retinal artery and vein vessel segmentation of the fundus oculi of the present invention;
FIG. 3 is a structure diagram of a VGG-Att-UNet network according to the present invention;
FIG. 4 is a diagram of an iterative semi-supervised learning framework of the present invention;
FIG. 5 is a diagram of the YOLO v3 optic disc detection and positioning network of the present invention;
FIG. 6 is a flow chart of retinal artery and vein diameter calculation according to the present invention;
FIG. 7 shows test results of the method of the present invention, in which (a) is the arteriovenous segmentation result, (b) is the optic disc detection result, (c) is the perpendicular-line drawing, and (d) is a partial enlargement of the drawn perpendiculars.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Various structural schematics according to the disclosed embodiments of the invention are shown in the drawings. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of various regions, layers and their relative sizes and positional relationships shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, according to actual needs.
The invention provides a method for calculating fundus retinal arteriovenous vessel diameters that combines an image preprocessing method, a semi-supervised arteriovenous semantic segmentation method, a YOLO v3-based optic disc detection method, and a perpendicular-line-based diameter calculation method, so that the fundus retinal arteriovenous diameters and diameter ratio are calculated accurately and automatically. The invention also provides a preprocessing method that removes background differences between retinal images, a semi-supervised semantic segmentation method that improves model performance, and an accurate YOLO v3-based optic disc detection method.
Referring to fig. 1, the method for calculating the diameter of the artery and vein of the retina of the eye of the present invention includes the following steps:
S1, fundus retinal image preprocessing, which removes background differences between images from different databases;
S101, first converting the color fundus retinal image to the HSV color space according to the following formulas:
R' = R/255, G' = G/255, B' = B/255
Cmax = max(R', G', B'), Cmin = min(R', G', B'), Δ = Cmax − Cmin
V = Cmax; S = 0 if Cmax = 0, otherwise S = Δ/Cmax
H = 0 if Δ = 0; H = 60°·(((G' − B')/Δ) mod 6) if Cmax = R'; H = 60°·((B' − R')/Δ + 2) if Cmax = G'; H = 60°·((R' − G')/Δ + 4) if Cmax = B'
where dividing the R, G, B values by 255 gives R', G', B', changing the value range from [0, 255] to [0, 1]; Cmax and Cmin are the maximum and minimum of the three channels, and Δ = Cmax − Cmin is their difference.
S102, denoising the image and enhancing its contrast in the HSV color space with median filtering and contrast-limited adaptive histogram equalization;
S103, filtering the result of step S102 with a median filter whose kernel size is designed as 1/10 of the image width to extract the retinal background;
S104, denoting the result of step S102 as src1 and the result of step S103 as src2, and fusing the two images with formula (2):
dst = α*src1 + β*src2 + γ (2)
with α = 5, β = −5, and γ = 128, which yields the preprocessed image with the retinal background removed.
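A sketch of the fusion step S104 with the stated constants. In a full OpenCV pipeline, src1 would come from cv2.medianBlur plus cv2.createCLAHE and src2 from a large-kernel cv2.medianBlur (kernel about 1/10 of the image width); the clipping below mirrors the 8-bit saturation of cv2.addWeighted:

```python
import numpy as np

def remove_background(enhanced, background, alpha=5.0, beta=-5.0, gamma=128.0):
    """Step S104: dst = alpha*src1 + beta*src2 + gamma, where src1 is the
    denoised/contrast-enhanced image and src2 the estimated retinal
    background. The result is clipped to the 8-bit range."""
    dst = (alpha * enhanced.astype(np.float64)
           + beta * background.astype(np.float64)
           + gamma)
    return np.clip(dst, 0, 255).astype(np.uint8)
```

Where the enhanced image matches the background estimate the output settles at the gray level γ = 128, so only structures that differ from the background (i.e. the vessels) survive with contrast.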
S2, constructing the fundus retinal arteriovenous semantic segmentation model based on iterative semi-supervised learning, as shown in FIG. 2;
S201, unifying the arteriovenous labels of the three different-source data sets RITE, IOSTAR, and AVRDB: artery pixels red (255, 0, 0), vein pixels blue (0, 0, 255), crossing pixels green (0, 255, 0), and background pixels black (0, 0, 0);
S202, preprocessing the 170 fundus retinal images with arteriovenous labels from step S201 using the method of step S1, splitting them into training, validation, and test sets at a 6:2:2 ratio, and augmenting the 102 training images 50-fold by random cropping, stretching, and translation;
S203, training a U-Net semantic segmentation model integrating an attention mechanism and a VGG backbone on the processed training set; this is the basic model, the VGG-Att-UNet model, shown in FIG. 3.
The VGG-Att-UNet model adopts the Tversky loss to address class imbalance in arteriovenous semantic segmentation; its objective function is:
T(A, B) = |A ∩ B| / (|A ∩ B| + α·|A \ B| + β·|B \ A|),  Loss = 1 − T(A, B)
where A and B represent the point sets contained in the two contour regions, i.e. the pixel sets corresponding to the predicted and actual regions, with β = 0.7 and α = 0.3. The AdaDelta adaptive learning-rate optimization method is selected to optimize the loss, the batch size is set to 8, training is repeated for 50 rounds with cross-validation on the validation set each round, and the model with the highest validation accuracy is finally saved.
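A minimal NumPy version of this Tversky objective, assuming the standard index definition with the stated α = 0.3 and β = 0.7 (β > α weights false negatives more heavily, favoring vessel recall):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss for one class:
    1 - |A∩B| / (|A∩B| + alpha*|A\\B| + beta*|B\\A|),
    where A is the predicted region and B the ground-truth region."""
    pred = pred.astype(np.float64).ravel()
    target = target.astype(np.float64).ravel()
    tp = (pred * target).sum()          # |A ∩ B|
    fp = (pred * (1 - target)).sum()    # |A \ B|, false positives
    fn = ((1 - pred) * target).sum()    # |B \ A|, false negatives
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)
```

With alpha = beta = 0.5 the expression reduces to the Dice loss, which is why Tversky is a natural generalization for imbalanced classes such as thin vessels against a large background.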
S204, starting from the basic model, performing iterative semi-supervised model training and updating on 1369 unlabeled fundus retinal images randomly selected from the SUSTech-SYSU and ORIGA databases.
Referring to fig. 4, the implementation process of the iterative semi-supervised learning includes:
S2041, first iteration L_0: denote the basic model M_0, then randomly sample N_0 = 685 images from the unlabeled data;
S2042, in iteration L_k (k = 1, …), sample N_k images from the unlabeled data. S_k sub-models are trained in each iteration, with S_0 = 8: the generated pseudo-labels are randomly sampled S_k times and combined with the labeled data to train the S_k sub-models, where S_k decreases with k for k > 1.
S2043, in each iteration, the S_{k−1} sub-models trained in the previous iteration are combined by their weights to test the N_k unlabeled images and obtain pseudo-labels, and the weight w_i of each sub-model is computed from the pixel-level segmentation probability maps: p_i^j denotes the probability the i-th sub-model assigns to pixel j (with j running over all R pixels) of an unlabeled image in the k-th iteration, and the sum of p_i^j over all pixels is the resulting probability mass of the i-th sub-model on that unlabeled image. A higher weight indicates stronger agreement between a single sub-model's prediction and the combined prediction of all sub-models.
S2044, the iteration steps S2042 to S2043 are repeated until S_k decreases to 1; the iteration then stops, yielding the arteriovenous segmentation model.
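The N_k and S_k update rules are published only as equation images, so the skeleton below shows the loop structure only, assuming for illustration that the sub-model count halves each round from S_0 = 8 down to 1; `train_submodel` and `ensemble_pseudo_label` are illustrative stand-ins, not the patent's procedures:

```python
import random

def train_submodel(labeled, pseudo):
    # Stand-in for training one VGG-Att-UNet sub-model; returns a dummy "model".
    return {"n_train": len(labeled) + len(pseudo)}

def ensemble_pseudo_label(models, batch):
    # Stand-in for the weighted combination of sub-model predictions (w_i).
    return [("pseudo", x) for x in batch]

labeled = list(range(102))              # labeled training images after the split
unlabeled = list(range(1369))           # unlabeled SUSTech-SYSU / ORIGA images
models = [train_submodel(labeled, [])]  # M_0, the base model
s_k, n_k = 8, 685                       # S_0 = 8 sub-models, N_0 = 685 samples
schedule = []

while s_k >= 1:
    batch = random.sample(unlabeled, n_k)            # draw N_k unlabeled images
    pseudo = ensemble_pseudo_label(models, batch)    # pseudo-label with the ensemble
    models = [train_submodel(labeled, pseudo) for _ in range(s_k)]
    schedule.append(s_k)
    if s_k == 1:                                     # stop when S_k reaches 1
        break
    s_k //= 2        # assumed halving; the patent's S_k formula is an image
```

Under the assumed halving schedule the loop runs four rounds (8, 4, 2, 1 sub-models) and finishes with a single model, matching the stated stopping condition.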
S205, the model is evaluated on the test set using pixel accuracy, mean intersection-over-union, and frequency-weighted intersection-over-union.
S3, constructing a fundus optic disc detection positioning model based on YOLO v3, as shown in FIG. 5;
S301, based on the collected SUSTech-SYSU and ORIGA data sets, the retina images therein are preprocessed with the method of step S1;
S302, optic disc labeling is performed on 650 fundus images in the ORIGA data set with LabelImg software to obtain corresponding xml annotation files;
S303, the 1369 fundus images in the database are divided into a training set, a validation set and a test set in a 6:2:2 ratio;
S304, a YOLO v3 optic disc detection model is trained on the 821 training images. YOLO v3 divides the input image into an S × S grid, and each grid cell is responsible for predicting targets whose centers fall inside it. Each cell predicts the positions and confidences of B bounding boxes centered on it; the position of one bounding box corresponds to four values (x, y, w, h), namely the coordinates of the box center and the box width and height. The confidence measures how accurately the bounding box predicts the target position and is calculated as follows:
Confidence = Pr(Object) × IOU
where Pr(Object) represents the probability that the bounding box contains the target to be predicted: Pr(Object) = 1 if the box contains a target and 0 otherwise; IOU is the ratio of the intersection to the union of the ground-truth target bounding box and the predicted bounding box. Assuming C classes are predicted in total, the model output is a tensor of size S × S × (5 × B + C).
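The confidence definition above can be sketched directly; the (x_min, y_min, x_max, y_max) box format and helper names are illustrative:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def confidence(contains_object, box_pred, box_truth):
    """Pr(Object) * IOU, with Pr(Object) in {0, 1} as described in the text."""
    return (1.0 if contains_object else 0.0) * iou(box_pred, box_truth)

# Two 2x2 boxes overlapping in a 1x1 region: intersection 1, union 7.
c = confidence(True, (0, 0, 2, 2), (1, 1, 3, 3))
```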
The model parameters are adjusted with the Adam optimizer, and the objective function of YOLO v3 is formula (9); the loss on the validation set is monitored, and the model with the minimum validation loss is saved for application.
Loss = λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)²]
     + λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [(√w_i − √ŵ_i)² + (√h_i − √ĥ_i)²]
     + Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} (C_i − Ĉ_i)²
     + λ_noobj Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{noobj} (C_i − Ĉ_i)²
     + Σ_{i=0}^{S²} 1_i^{obj} Σ_{c∈classes} (p_i(c) − p̂_i(c))²        (9)
where 1_i^{obj} indicates that grid cell i contains an object, and 1_{ij}^{obj} indicates that the prediction of the target by the j-th bounding box in grid cell i is trusted. The first two terms of the loss function predict the position coordinates of the bounding box, the third term predicts the confidence of boxes containing a target, the fourth term the confidence of boxes not containing a target, and the last term the target category.
S305, model evaluation is performed on the 274 test images using the mean intersection-over-union.
S4, fundus arteriovenous vessel diameter calculation based on the vessel perpendicular line, as shown in fig. 6.
S401, the input color fundus retina image is tested based on the arteriovenous segmentation model of step S2 and the optic disc detection and positioning model of step S3, obtaining the arteriovenous vessel segmentation result and the optic disc detection result;
S402, the region between 2 and 3 optic-disc radii is extracted from the arteriovenous result obtained in step S401, based on the detected optic disc;
S403, starting from 2 optic-disc radii, black circles with a line width of 0.15 times the disc radius are drawn on the extraction result of step S402 at intervals of 0.1 disc radius, covering the extracted vessels to obtain a series of vessel segments;
S404, because vessel bifurcations and noise pixels in the segmentation result would interfere with the diameter calculation, the contours of the connected vessel-segment regions are acquired with the findContours function in OpenCV, and contours with excessively large or small areas are eliminated as vessel interference;
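The ring-drawing of step S403 can be sketched without OpenCV by masking pixels whose distance from the disc center lies on a ring; the array sizes, center, and single ring radius below are illustrative, with only the 0.15-radius ring width following the text:

```python
import numpy as np

def ring_cut(vessel_mask, center, disc_r, ring_radii, ring_width):
    """Black out concentric rings on a binary vessel mask, leaving vessel segments.

    vessel_mask: 2-D bool array (True = vessel pixel).
    ring_radii: ring radii in units of the optic-disc radius disc_r.
    """
    h, w = vessel_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])   # distance to disc center
    out = vessel_mask.copy()
    for r in ring_radii:
        ring = np.abs(dist - r * disc_r) < (ring_width * disc_r) / 2
        out[ring] = False          # the drawn black circle covers the vessel
    return out

mask = np.ones((100, 100), dtype=bool)   # toy "vessel" filling the image
cut = ring_cut(mask, center=(50, 50), disc_r=20, ring_radii=[2.0], ring_width=0.15)
```

Each ring severs the vessels crossing it, so the surviving connected components between rings are exactly the vessel sections that steps S404 and S405 go on to filter and measure.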
S405, based on the vessel contours extracted in step S404, the centroid coordinates of each contour are acquired with the moments function in OpenCV, and the centroid of each vessel segment is marked;
S406, based on the vessel centroid coordinates extracted in step S405, let (x_{v1}, y_{v1}) and (x_{v2}, y_{v2}) be the centroid coordinates of two vessel segments sampled from the same vessel; the equation of the line through the two centroids is then determined as follows:
y − y_{v1} = k · (x − x_{v1})
where the slope of the line is k = (y_{v2} − y_{v1}) / (x_{v2} − x_{v1}), from which the equation of the perpendicular line passing through the midpoint of the two centroids is calculated as follows:
y − (y_{v1} + y_{v2})/2 = −(1/k) · (x − (x_{v1} + x_{v2})/2)
S407, the number of pixels where the perpendicular line intersects the vessel is counted to obtain an approximate calculated value of the vessel diameter.
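Steps S406 and S407 amount to sampling along the perpendicular through the centroid midpoint and counting vessel pixels; a minimal NumPy sketch, in which the sampling step, bounds, and toy mask are illustrative:

```python
import numpy as np

def diameter_along_perpendicular(vessel, c1, c2, max_len=50):
    """Count vessel pixels on the perpendicular bisector of segment c1-c2.

    vessel: 2-D bool array; c1, c2: (x, y) centroids of two sections of one vessel.
    """
    (x1, y1), (x2, y2) = c1, c2
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Direction of the centroid line, rotated 90 degrees -> perpendicular direction.
    dx, dy = x2 - x1, y2 - y1
    norm = np.hypot(dx, dy)
    px, py = -dy / norm, dx / norm
    count = 0
    for t in np.arange(-max_len, max_len, 0.5):      # sample along the perpendicular
        x, y = int(round(mx + t * px)), int(round(my + t * py))
        if 0 <= x < vessel.shape[1] and 0 <= y < vessel.shape[0] and vessel[y, x]:
            count += 1
    return count * 0.5          # samples are 0.5 px apart -> approximate pixel count

vessel = np.zeros((40, 40), dtype=bool)
vessel[:, 18:24] = True                     # vertical toy "vessel", 6 px wide
d = diameter_along_perpendicular(vessel, (20, 10), (20, 30))
```

For the 6-pixel-wide toy vessel the perpendicular is horizontal and the count recovers a diameter close to 6, up to rounding at the vessel edges.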
For the arteriovenous diameter ratio, the Central Retinal Artery Equivalent (CRAE) and the Central Retinal Vein Equivalent (CRVE) are calculated as follows:
CRAE = 0.88 · √(w_i² + w_j²),  CRVE = 0.95 · √(w_i² + w_j²)  (12)
where w_i and w_j are the diameters of the wider and the narrower vessel, respectively; the arteriovenous diameter ratio is then calculated as follows:
AVR=CRAE/CRVE (13)
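A sketch of the ratio computation; the patent's CRAE/CRVE formula is published as an image, so the 0.88 and 0.95 coefficients below are taken from the revised Knudtson formulas as an assumption, not from the patent:

```python
import math

def crae(wide, narrow):
    # Artery equivalent from one (wide, narrow) diameter pair (assumed 0.88).
    return 0.88 * math.hypot(wide, narrow)

def crve(wide, narrow):
    # Vein equivalent from one (wide, narrow) diameter pair (assumed 0.95).
    return 0.95 * math.hypot(wide, narrow)

def avr(artery_pair, vein_pair):
    """AVR = CRAE / CRVE from one (wide, narrow) diameter pair per vessel type."""
    return crae(*artery_pair) / crve(*vein_pair)

ratio = avr((12.0, 9.0), (16.0, 12.0))   # toy diameters in pixels
```

Since veins normally run wider than arteries, a healthy AVR is below 1, which the toy values reproduce.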
S408, the accuracy is verified by the error between the calculated results and the gold standard in the INSPIRE-AVR database.
In another embodiment of the present invention, a fundus retinal arteriovenous vessel diameter calculation system is provided, which can be used to implement the above fundus retinal arteriovenous vessel diameter calculation method. Specifically, the system includes a data preprocessing module, an arteriovenous segmentation module, an optic disc detection and positioning module, and an arteriovenous vessel diameter calculation module.
The data preprocessing module is used for preprocessing the color fundus retina image;
the arteriovenous segmentation module is used for constructing an eye fundus retina arteriovenous semantic segmentation model based on iterative semi-supervised learning, and performing arteriovenous segmentation on the image preprocessed by the data preprocessing module to obtain an arteriovenous segmentation model;
the optic disc detection positioning module is used for preprocessing retina images in the SUSTech-SYSU data set and the ORIGA data set and constructing a fundus optic disc detection positioning model based on YOLO v 3;
and the arteriovenous vessel diameter calculation module is used for calculating the number of pixels intersected between the perpendicular line of the arteriovenous vessel and the vessel based on the arteriovenous segmentation model obtained by the arteriovenous segmentation module and the fundus optic disc detection positioning model constructed by the optic disc detection positioning module to obtain the arteriovenous vessel diameter, and calculating the vessel diameter by adopting the arteriovenous diameter ratio.
In yet another embodiment of the present invention, a terminal device is provided that includes a processor and a memory for storing a computer program comprising program instructions, the processor being configured to execute the program instructions stored by the computer storage medium. The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. As the computing and control core of the terminal, it is adapted to load and execute one or more instructions to implement the corresponding method flow or function; the processor provided by the embodiment of the invention can be used to run the fundus retinal arteriovenous vessel diameter calculation method, including:
preprocessing a color fundus retina image; constructing an eye fundus retina arteriovenous semantic segmentation model based on iterative semi-supervised learning, and performing arteriovenous segmentation on the preprocessed image to obtain an arteriovenous segmentation model; preprocessing retina images in the SUSTech-SYSU data set and the ORIGA data set, and constructing a fundus optic disc detection positioning model based on YOLO v 3; and calculating the number of pixels intersected between the artery and vein blood vessel vertical lines and the blood vessels based on the artery and vein segmentation model and the fundus optic disc detection positioning model to obtain the diameter of the artery and vein blood vessels, and calculating the diameter of the blood vessels by adopting the diameter ratio of the artery and vein.
In still another embodiment of the present invention, the present invention further provides a storage medium, specifically a computer-readable storage medium (Memory), which is a Memory device in a terminal device and is used for storing programs and data. It is understood that the computer readable storage medium herein may include a built-in storage medium in the terminal device, and may also include an extended storage medium supported by the terminal device. The computer-readable storage medium provides a storage space storing an operating system of the terminal. Also, one or more instructions, which may be one or more computer programs (including program code), are stored in the memory space and are adapted to be loaded and executed by the processor. It should be noted that the computer-readable storage medium may be a high-speed RAM memory, or may be a non-volatile memory (non-volatile memory), such as at least one disk memory.
One or more instructions stored in the computer-readable storage medium can be loaded and executed by the processor to realize the corresponding steps of the method for calculating the diameter of the artery and vein of the fundus retina in the embodiment; one or more instructions in the computer-readable storage medium are loaded by the processor and perform the steps of:
preprocessing a color fundus retina image; constructing an eye fundus retina arteriovenous semantic segmentation model based on iterative semi-supervised learning, and performing arteriovenous segmentation on the preprocessed image to obtain an arteriovenous segmentation model; preprocessing retina images in the SUSTech-SYSU data set and the ORIGA data set, and constructing a fundus optic disc detection positioning model based on YOLO v 3; and calculating the number of pixels intersected between the artery and vein blood vessel vertical lines and the blood vessels based on the artery and vein segmentation model and the fundus optic disc detection positioning model to obtain the diameter of the artery and vein blood vessels, and calculating the diameter of the blood vessels by adopting the diameter ratio of the artery and vein.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to an automatic method for calculating retinal arteriovenous vessel diameters. Taking the INSPIRE-AVR database as an example, the method was tested; the obtained arteriovenous vessel segmentation, optic disc detection and vessel perpendicular results are shown in FIG. 7, and the calculated arteriovenous diameter ratios are shown in Table 1.
TABLE 1 arteriovenous vessel diameter ratio calculated by the method on INSPIRE-AVR data set
(Table 1 is published as an image in the original; not reproduced here)
The error between the result of the present invention and the gold standard was calculated, and the mean and standard deviation of the error are shown in table 2.
TABLE 2 average error Table between calculated results and standard values of the present invention
(Table 2 is published as an image in the original; not reproduced here)
In summary, the test in this embodiment shows that the average error between the retinal arteriovenous diameter ratios obtained by the method and the two published gold standards is very small, indicating that the calculation method has high accuracy; the standard deviation of the error is also small, indicating that the calculation method is stable.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (10)

1. A method for calculating the diameter of an artery and vein of a retina is characterized by comprising the following steps:
s1, preprocessing the color fundus retina image;
s2, constructing an eye fundus retina arteriovenous semantic segmentation model based on iterative semi-supervised learning, and performing arteriovenous segmentation on the image preprocessed in the step S1 to obtain an arteriovenous segmentation model;
s3, preprocessing the retina images in the SUSTech-SYSU data set and the ORIGA data set, and constructing a fundus optic disc detection positioning model based on YOLO v 3;
s4, calculating the number of pixels of intersection between the artery and vein blood vessel vertical lines and the blood vessel based on the arteriovenous segmentation model obtained in the step S2 and the fundus optic disc detection positioning model constructed in the step S3 to obtain the artery and vein blood vessel diameter, and calculating the blood vessel diameter by adopting the arteriovenous diameter ratio.
2. The method according to claim 1, wherein the step S1 preprocessing specifically includes:
S101, the color fundus retina image is converted into the HSV color space;
S102, image denoising and contrast enhancement are realized in the HSV color space by median filtering and contrast-limited adaptive histogram equalization;
S103, the result obtained in step S102 is filtered with a median filter whose size is designed as 1/10 of the picture width, so as to obtain the retinal background;
S104, using image superposition, the result obtained in step S102 is recorded as src1 and the result obtained in step S103 as src2, and image fusion is then performed to obtain a preprocessing result with the retinal background removed.
3. The method according to claim 1, wherein step S2 is specifically:
S201, the different arteriovenous labels in the three source data sets RITE, IOSTAR and AVRDB are unified, so that artery pixels are red (255,0,0), vein pixels blue (0,0,255), crossing pixels green (0,255,0) and background pixels black (0,0,0);
S202, the 170 fundus retinal images with arteriovenous labels from step S201 are preprocessed with the method of step S1 and divided into a training set, a validation set and a test set in a 6:2:2 ratio; the 102 training images obtained by the division are randomly cropped, stretched and translated for 50-fold data augmentation;
S203, a U-Net semantic segmentation model integrating an attention mechanism and a VGG structure is trained and tested on the training images processed in steps S201 and S202, yielding the basic model, the VGG-Att-UNet model;
S204, based on the basic model, 1369 color fundus retina images are randomly selected from the public SUSTech-SYSU and ORIGA databases, and the basic model is trained and updated within an iterative semi-supervised learning framework to obtain the semi-supervised arteriovenous segmentation model, the SVA-UNet model.
4. The method according to claim 3, wherein in step S203, the loss function of the VGG-Att-UNet model is as follows:
(formula published as an image in the original; not reproduced here)
A and B represent the point sets contained in the two contour regions; beta is 0.7 and alpha is 0.3. The AdaDelta adaptive learning-rate optimization method is selected to optimize the cross-entropy loss function, the batch size is set to 8, training is repeated for 50 rounds with cross validation on the validation set in each round, and the model with the highest accuracy on the validation set is finally saved.
5. The method according to claim 1, wherein step S204 specifically comprises:
S2041, first iteration L_0: the basic model is recorded as M_0, and N_0 = 685 images are then randomly sampled from the unlabeled data;
S2042, in the k-th iteration L_k, N_k images are sampled from the unlabeled data each time, k = 1, ..., N;
S2043, in each iteration, the S_{k-1} sub-models trained in the previous iteration are combined with weights and used to test the N_k unlabeled data to obtain pseudo labels, and the weight w_i of each sub-model is calculated;
S2044, the iteration steps S2042 to S2043 are repeated until S_k decreases to 1; the iteration then stops, yielding the arteriovenous segmentation model.
6. The method according to claim 1, wherein step S3 is specifically:
S301, based on the collected SUSTech-SYSU and ORIGA data sets, the fundus retina images therein are preprocessed with the method of step S1;
S302, optic disc labeling is performed on the fundus images in the ORIGA data set with LabelImg software to obtain corresponding xml annotation files;
S303, 1369 fundus retinal images are randomly selected from the data preprocessed in steps S301 and S302 and labeled with optic discs, and divided into a training set, a validation set and a test set in a 6:2:2 ratio;
S304, a YOLO v3 optic disc detection model is trained on the training set; the model divides the input image into an S × S grid, each grid cell being responsible for predicting targets whose centers fall inside it, thereby completing construction of the YOLO v3-based fundus optic disc detection and positioning model.
7. The method according to claim 6, wherein in step S304, each grid cell predicts the positions and confidences of B bounding boxes centered on it, the position of one bounding box corresponding to four values (x, y, w, h), namely the coordinates of the box center and the box width and height; the confidence measures how accurately the bounding box predicts the target position and is specifically calculated as follows:
Confidence = Pr(Object) × IOU
where Pr(Object) represents the probability that the bounding box contains the target to be predicted: Pr(Object) = 1 if the box contains a target and 0 otherwise; IOU is the ratio of the intersection to the union of the ground-truth target bounding box and the predicted bounding box; C classes are predicted in total, resulting in a YOLO v3 model output tensor of size S × S × (5 × B + C).
8. The method according to claim 1, wherein step S4 is specifically:
S401, the input color fundus retina image is tested with the arteriovenous segmentation and optic disc detection models to obtain arteriovenous vessel segmentation and optic disc detection results;
S402, the region between 2 and 3 optic-disc radii is extracted from the arteriovenous result obtained in step S401, based on the optic disc detected in step S401;
S403, starting from 2 optic-disc radii, black circles with a line width of 0.15 times the disc radius are drawn on the extraction result of step S402 at intervals of 0.1 disc radius, covering the extracted vessels to obtain a series of vessel segments;
S404, the contours of the connected vessel-segment regions obtained in step S403 are acquired with the findContours function in OpenCV;
S405, based on the contours of the connected vessel regions extracted in step S404, the centroid coordinates of each contour are acquired with the moments function in OpenCV, and the centroid of each vessel segment is marked;
S406, based on the vessel centroid coordinates extracted in step S405, let (x_{v1}, y_{v1}) and (x_{v2}, y_{v2}) be the centroid coordinates of two vessel segments sampled from the same vessel;
S407, the number of pixels where the perpendicular line intersects the vessel is counted to obtain an approximate calculated value of the vessel diameter.
9. The method of claim 8, wherein in step S407, the arteriovenous diameter ratio is calculated as follows:
AVR=CRAE/CRVE
wherein CRAE is the central artery equivalent and CRVE is the central vein equivalent.
10. A fundus retinal arteriovenous vessel diameter calculation system, comprising:
the data preprocessing module is used for preprocessing the color fundus retina image;
the arteriovenous segmentation module is used for constructing an eye fundus retina arteriovenous semantic segmentation model based on iterative semi-supervised learning, and performing arteriovenous segmentation on the image preprocessed by the data preprocessing module to obtain an arteriovenous segmentation model;
the optic disc detection positioning module is used for preprocessing retina images in the SUSTech-SYSU data set and the ORIGA data set and constructing a fundus optic disc detection positioning model based on YOLO v 3;
and the arteriovenous vessel diameter calculation module is used for calculating the number of pixels intersected between the perpendicular line of the arteriovenous vessel and the vessel based on the arteriovenous segmentation model obtained by the arteriovenous segmentation module and the fundus optic disc detection positioning model constructed by the optic disc detection positioning module to obtain the arteriovenous vessel diameter, and calculating the vessel diameter by adopting the arteriovenous diameter ratio.
CN202110536793.9A 2021-05-17 2021-05-17 Fundus retina artery and vein vessel diameter calculation method and system Active CN113269737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110536793.9A CN113269737B (en) 2021-05-17 2021-05-17 Fundus retina artery and vein vessel diameter calculation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110536793.9A CN113269737B (en) 2021-05-17 2021-05-17 Fundus retina artery and vein vessel diameter calculation method and system

Publications (2)

Publication Number Publication Date
CN113269737A true CN113269737A (en) 2021-08-17
CN113269737B CN113269737B (en) 2024-03-19

Family

ID=77231335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110536793.9A Active CN113269737B (en) 2021-05-17 2021-05-17 Fundus retina artery and vein vessel diameter calculation method and system

Country Status (1)

Country Link
CN (1) CN113269737B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792740A (en) * 2021-09-16 2021-12-14 平安科技(深圳)有限公司 Arteriovenous segmentation method, system, equipment and medium for fundus color photography
CN115511883A (en) * 2022-11-10 2022-12-23 北京鹰瞳科技发展股份有限公司 Method, apparatus and storage medium for determining curvature of retinal fundus blood vessel
WO2023240319A1 (en) * 2022-06-16 2023-12-21 Eyetelligence Limited Fundus image analysis system

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103006175A (en) * 2012-11-14 2013-04-03 天津工业大学 Method for positioning optic disk for eye fundus image on basis of PC (Phase Congruency)
EP3247257A1 (en) * 2015-01-19 2017-11-29 Statumanu ICP ApS Method and apparatus for non-invasive assessment of intracranial pressure
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
US20180146370A1 (en) * 2016-11-22 2018-05-24 Ashok Krishnaswamy Method and apparatus for secured authentication using voice biometrics and watermarking
US20190014982A1 (en) * 2017-07-12 2019-01-17 iHealthScreen Inc. Automated blood vessel feature detection and quantification for retinal image grading and disease screening
CN109685770A (en) * 2018-12-05 2019-04-26 合肥奥比斯科技有限公司 Retinal vessel curvature determines method
CN111000563A (en) * 2019-11-22 2020-04-14 北京理工大学 Automatic measuring method and device for retinal artery and vein diameter ratio
CN111222361A (en) * 2018-11-23 2020-06-02 福州依影健康科技有限公司 Method and system for analyzing hypertension retina vascular change characteristic data
CN111242933A (en) * 2020-01-15 2020-06-05 华南理工大学 Retina image artery and vein classification device, equipment and storage medium
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN111681242A (en) * 2020-08-14 2020-09-18 北京至真互联网技术有限公司 Retinal vessel arteriovenous distinguishing method, device and equipment
CN112075922A (en) * 2020-10-14 2020-12-15 中国人民解放军空军军医大学 Method for measuring fundus image indexes of type 2 diabetes mellitus and analyzing correlation between fundus image indexes and diabetic nephropathy
CN112716446A (en) * 2020-12-28 2021-04-30 深圳硅基智能科技有限公司 Method and system for measuring pathological change characteristics of hypertensive retinopathy
US20210133955A1 (en) * 2019-10-30 2021-05-06 Nikon Corporation Image processing method, image processing device, and storage medium

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103006175A (en) * 2012-11-14 2013-04-03 天津工业大学 Method for positioning optic disk for eye fundus image on basis of PC (Phase Congruency)
EP3247257A1 (en) * 2015-01-19 2017-11-29 Statumanu ICP ApS Method and apparatus for non-invasive assessment of intracranial pressure
US20170367598A1 (en) * 2015-01-19 2017-12-28 Statumanu Icp Aps Method and apparatus for non-invasive assessment of intracranial pressure
US20180146370A1 (en) * 2016-11-22 2018-05-24 Ashok Krishnaswamy Method and apparatus for secured authentication using voice biometrics and watermarking
US20190014982A1 (en) * 2017-07-12 2019-01-17 iHealthScreen Inc. Automated blood vessel feature detection and quantification for retinal image grading and disease screening
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN111222361A (en) * 2018-11-23 2020-06-02 福州依影健康科技有限公司 Method and system for analyzing hypertension retina vascular change characteristic data
CN109685770A (en) * 2018-12-05 2019-04-26 合肥奥比斯科技有限公司 Retinal vessel curvature determines method
US20210133955A1 (en) * 2019-10-30 2021-05-06 Nikon Corporation Image processing method, image processing device, and storage medium
CN111000563A (en) * 2019-11-22 2020-04-14 北京理工大学 Automatic measuring method and device for retinal artery and vein diameter ratio
CN111242933A (en) * 2020-01-15 2020-06-05 华南理工大学 Retina image artery and vein classification device, equipment and storage medium
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN111681242A (en) * 2020-08-14 2020-09-18 北京至真互联网技术有限公司 Retinal vessel arteriovenous distinguishing method, device and equipment
CN112075922A (en) * 2020-10-14 2020-12-15 中国人民解放军空军军医大学 Method for measuring fundus image indexes of type 2 diabetes mellitus and analyzing correlation between fundus image indexes and diabetic nephropathy
CN112716446A (en) * 2020-12-28 2021-04-30 深圳硅基智能科技有限公司 Method and system for measuring pathological change characteristics of hypertensive retinopathy

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GUEDRI HICHEM: "Generation of new blood vessels in the human retina with L-system fractal construction", 2012 6TH INTERNATIONAL CONFERENCE ON SCIENCES OF ELECTRONICS, TECHNOLOGIES OF INFORMATION AND TELECOMMUNICATIONS (SETIT) *
XUE Lanyan; CAO Xinrong; LIN Jiawen; ZHENG Shaohua; YU Lun: "Automatic classification method for arteriovenous vessels and measurement of their diameters", Chinese Journal of Scientific Instrument, no. 09 *
GUO Qianru; XU Yingjun: "Research status of retinal microvascular diameter measurement and its relationship with diabetes", Chinese Journal of Coal Industry Medicine, no. 12 *
GAO Yingqi; GUO Song; LI Ning; WANG Kai; KANG Hong; LI Tao: "Semantic fusion method for arteriovenous classification in fundus images", Journal of Image and Graphics, no. 10 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792740A (en) * 2021-09-16 2021-12-14 平安科技(深圳)有限公司 Arteriovenous segmentation method, system, equipment and medium for fundus color photography
CN113792740B (en) * 2021-09-16 2023-12-26 平安创科科技(北京)有限公司 Arteriovenous segmentation method, system, equipment and medium for fundus color photography
WO2023240319A1 (en) * 2022-06-16 2023-12-21 Eyetelligence Limited Fundus image analysis system
CN115511883A (en) * 2022-11-10 2022-12-23 北京鹰瞳科技发展股份有限公司 Method, apparatus and storage medium for determining curvature of retinal fundus blood vessel

Also Published As

Publication number Publication date
CN113269737B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN113269737B (en) Fundus retina artery and vein vessel diameter calculation method and system
US11487995B2 (en) Method and apparatus for determining image quality
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
CN113344849B (en) Microemulsion head detection system based on YOLOv5
CN107657612A (en) Fully automatic retinal vessel analysis method and system suitable for intelligent portable devices
CN112529839B (en) Method and system for extracting carotid vessel centerline in nuclear magnetic resonance image
US20220383661A1 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
CN111539480A (en) Multi-class medical image identification method and equipment
US20220406090A1 (en) Face parsing method and related devices
CN109583364A (en) Image recognition method and equipment
CN115953393B (en) Intracranial aneurysm detection system, device and storage medium based on multitask learning
CN110008853A (en) Pedestrian detection network and model training method, detection method, medium and equipment
CN113706579A (en) Prawn multi-target tracking system and method for industrial aquaculture
CN110458096A (en) Large-scale commodity recognition method based on deep learning
CN114639102B (en) Cell segmentation method and device based on key point and size regression
CN113469099A (en) Training method, detection method, device, equipment and medium of target detection model
CN110472673B (en) Parameter adjustment method, fundus image processing device, fundus image processing medium and fundus image processing apparatus
CN111127400A (en) Method and device for detecting breast lesions
CN114511898A (en) Pain recognition method and device, storage medium and electronic equipment
CN117253071B (en) Semi-supervised target detection method and system based on multistage pseudo tag enhancement
CN113096080A (en) Image analysis method and system
CN111260619A (en) Tongue body automatic segmentation method based on U-net model
CN116468702A (en) Chloasma assessment method, device, electronic equipment and computer readable storage medium
CN109948541A (en) Facial emotion recognition method and system
CN113034451A (en) Chest DR image identification method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Guo Peihong

Inventor after: Zhang Dalei

Inventor after: Zu Jian

Inventor after: Hu Na

Inventor before: Zu Jian

Inventor before: Guo Peihong

Inventor before: Hu Na

Inventor before: Zhang Dalei

CB03 Change of inventor or designer information
TA01 Transfer of patent application right

Effective date of registration: 20210924

Address after: Room 21, floor 4, building 2, yard a 2, North West Third Ring Road, Haidian District, Beijing 100048

Applicant after: Beijing Yingtong Technology Development Co.,Ltd.

Address before: 710049 No. 28 West Xianning Road, Shaanxi, Xi'an

Applicant before: XI'AN JIAOTONG University

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant