WO2019196268A1 - 基于深度学习的糖尿病视网膜图像分类方法及系统 - Google Patents

基于深度学习的糖尿病视网膜图像分类方法及系统 (Deep learning-based diabetic retinal image classification method and system)

Info

Publication number
WO2019196268A1
WO2019196268A1 (PCT/CN2018/098390)
Authority
WO
WIPO (PCT)
Prior art keywords
lesion
image
model
training
recognition model
Prior art date
Application number
PCT/CN2018/098390
Other languages
English (en)
French (fr)
Inventor
吕绍林
Zhu Jiang
崔宗会
Wang Qia
陈瑞侠
Original Assignee
博众精工科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 博众精工科技股份有限公司
Priority to EP18914825.7A priority Critical patent/EP3779786A4/en
Publication of WO2019196268A1 publication Critical patent/WO2019196268A1/zh
Priority to US16/835,199 priority patent/US11132799B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/145Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/14532Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue for measuring glucose, e.g. by tissue impedance measurement
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • G06F18/256Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G06V10/811Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20182Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • The invention relates to the field of artificial intelligence, and in particular to a method and system for classifying diabetic retinal images based on deep learning.
  • Diabetic retinopathy is one of the serious complications of diabetes and the leading cause of blindness among people aged 20-65; it not only inflicts great harm and burden on society and patients' families, but also greatly reduces the quality of life of diabetic patients.
  • Addressing the above deficiencies of the prior art, the invention proposes a method and system for classifying diabetic retinal images based on deep learning, which reduces the demands on the descriptive capacity of the network model, makes the model easy to train, and can locate and delineate lesion regions for different lesion types, facilitating clinical screening by doctors.
  • the invention relates to a method for classifying diabetic retina images based on deep learning, comprising:
  • The same fundus image to be identified is fed into the microangioma lesion recognition model, the hemorrhage lesion recognition model and the exudation lesion recognition model for recognition; lesion feature information is extracted from the recognition results, and a trained support vector machine classifier classifies the extracted lesion feature information to obtain the lesion-grade classification result corresponding to the fundus image;
  • The microangioma lesion recognition model is obtained by extracting microangioma lesion candidate regions in fundus images, labeling microangioma lesion regions and non-microangioma lesion regions, and then inputting them into a CNN model for training;
  • The hemorrhage lesion recognition model is obtained by labeling hemorrhage lesion regions and non-hemorrhage lesion regions in fundus images and then inputting them into an FCN model for training;
  • The exudation lesion recognition model is obtained by labeling exudation lesion regions and non-exudation lesion regions in fundus images and then inputting them into an FCN model for training.
  • the microangioma lesion recognition model is trained based on the CNN model and includes the following steps:
  • A1, image preprocessing: extract the green channel image, apply the r-polynomial transform to correct the gray levels of the green channel image, then denoise with Gaussian filtering to obtain the corrected image I′_W;
  • A2, extraction of the microangioma lesion candidate region I_candidate: randomly select a pixel on the corrected image I′_W; with that pixel as reference, generate linear structuring elements of different scales at angular step α; perform morphological processing on I′_W with the generated elements to obtain the responses of the linear structuring elements at different scales; retain the minimum response I_closed for each pixel; obtain I_candidate = I_closed − I′_W; and apply mixed threshold segmentation extraction to I_candidate;
  • A3, data annotation: label the results of the I_candidate segmentation extraction as lesion or non-lesion to generate a training set;
  • A4, model training: input the training set into the CNN model for training to obtain the microangioma lesion recognition model.
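The A1 preprocessing step above can be sketched as follows. This is a minimal illustration assuming a NumPy RGB fundus image; since the exact r-polynomial formula appears in the original only as an image, a simple local-mean gray correction stands in for it, and names and parameter values such as `W=25` are illustrative, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def r_polynomial_correct(green, W=25, r=2, mu_min=0.0, mu_max=1.0):
    """Gray-level correction against the local mean mu_W (illustrative form)."""
    g = green.astype(np.float64)
    mu_w = uniform_filter(g, size=2 * W + 1)      # local mean in a W-neighborhood
    # Simple local contrast normalization standing in for the r-polynomial
    # transform, whose exact formula is given only as an image in the patent.
    return np.clip((g - mu_w) + 0.5, mu_min, mu_max)

def preprocess(rgb):
    green = rgb[..., 1] / 255.0                   # A1: extract the green channel
    corrected = r_polynomial_correct(green)       # gray-level correction
    return gaussian_filter(corrected, sigma=1.0)  # Gaussian denoising -> I'_W
```

The output plays the role of the corrected image I′_W consumed by step A2.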
  • The hemorrhage lesion recognition model and the exudation lesion recognition model are both trained on the FCN model through the following steps:
  • B1, label lesion regions and non-lesion regions in the fundus images by image processing to generate a training set;
  • B2, build an FCN model with the U-net network structure; at each iteration randomly take part of the labeled data from the training set for training to obtain the trained lesion recognition model; the cost function used in training is DICE.
  • The labeling of the fundus images also generates a test set, which is used to test the trained model and evaluate its recognition ability.
  • The labeling of the fundus images also generates a verification set, which is used during model training to prevent the network from over-fitting.
  • The image processing includes the following steps:
  • C1, extract the fundus region from the image;
  • C2, enhance the extracted fundus region with median filtering, and gray-normalize the enhanced result;
  • C3, apply threshold segmentation to the normalized result, then screen out lesion candidate regions by area features.
  • The support vector machine classifier is obtained by classification training on the lesion feature information corresponding to the training set.
  • The lesion feature information includes, but is not limited to, the number, area, shape, gray level, roundness and aspect ratio of lesion regions.
  • A system for classifying diabetic retinal images based on the above method, comprising:
  • the microangioma recognition module is configured to identify the image to be examined using the microangioma lesion recognition model, label the microangioma lesion in the image to be examined, and obtain corresponding lesion characteristic parameters;
  • the bleeding identification module is configured to identify the image to be inspected by using the hemorrhagic lesion recognition model, and segment the identified hemorrhagic lesion region and obtain corresponding lesion characteristic parameters;
  • the exudation recognition module is configured to identify the image to be inspected by using the exudation lesion recognition model, and segment the identified exuded lesion region and obtain corresponding lesion characteristic parameters;
  • a classification module configured to classify feature parameters of each lesion region obtained by identifying the image to be inspected to obtain a classification result of the lesion level of the image to be inspected.
  • Based on deep learning, the invention identifies microangioma, hemorrhage and exudation lesions separately and can automatically mark the position and size of lesion regions; compared with traditional methods combining hand-crafted features with image processing, it reduces the difficulty of developing a diabetic retinopathy recognition system. Because a different neural network model is adopted for each lesion type, the saved models achieve higher accuracy and stronger applicability for their specific lesions; classifying on multiple features of the three lesion types together (microangioma, hemorrhage, exudation) yields higher classification accuracy and can more effectively assist doctors in clinical screening.
  • FIG. 1 is a flow chart of the method in Embodiment 1;
  • FIG. 2 shows the extraction effect of microangioma candidate regions in Embodiment 1;
  • FIG. 3 shows the marking of microangioma lesion regions in Embodiment 1;
  • FIG. 4 shows the marking of hemorrhage lesion sites in Embodiment 1;
  • FIG. 5 shows the marking of exudation lesion sites in Embodiment 1.
  • As shown in FIG. 1, this embodiment relates to a method for classifying diabetic retinal images based on deep learning, comprising:
  • The same fundus image to be identified is fed into the microangioma lesion recognition model, the hemorrhage lesion recognition model and the exudation lesion recognition model for recognition; lesion feature information is extracted from the recognition results, and a trained support vector machine classifier classifies the extracted lesion feature information to obtain the lesion-grade classification result corresponding to the fundus image;
  • The microangioma lesion recognition model is obtained by extracting microangioma lesion candidate regions in fundus images, labeling microangioma lesion regions and non-microangioma lesion regions, and then inputting them into a CNN model for training;
  • The hemorrhage lesion recognition model is obtained by labeling hemorrhage lesion regions and non-hemorrhage lesion regions in fundus images and then inputting them into an FCN model for training;
  • The exudation lesion recognition model is obtained by labeling exudation lesion regions and non-exudation lesion regions in fundus images and then inputting them into an FCN model for training.
  • the microangioma lesion recognition model is trained based on the CNN model and includes the following steps:
  • A1, image preprocessing: extract the green channel image, apply the r-polynomial transform to correct the gray levels of the green channel image, then denoise with Gaussian filtering to obtain the corrected image I′_W. The r-polynomial transform (its formula is given as an image in the original) is parameterized as follows: r is the power of the polynomial, set to 2; μ_min is the minimum gray value, 0; μ_max is the maximum gray value, 1; G is the extracted green channel image; μ_W(i,j) is the mean gray value within the neighborhood of radius W centered at (i,j) in the green channel image; and I_W is the gray-equalized image obtained by the r-polynomial transform;
  • A2, extraction of the microangioma lesion candidate region I_candidate: randomly select a pixel on the corrected image I′_W; with that pixel as reference, generate linear structuring elements of different scales at angular steps of 10°-25°, preferably 15°; perform morphological processing on I′_W with the generated elements to obtain the responses of the linear structuring elements at different scales; retain the minimum response I_closed for each pixel; obtain I_candidate = I_closed − I′_W; and apply mixed threshold segmentation extraction to I_candidate, with the extraction effect shown in FIG. 2;
  • The conditions for the mixed threshold segmentation extraction (the condition formula is given as an image in the original) are:
  • K is a constant representing the maximum number of microangioma lesion candidate regions in the morphological processing, preferably 120;
  • CC is the function that counts lesion candidate regions;
  • t_l is the set minimum threshold;
  • t_u is the set maximum threshold;
  • t_k is the threshold that satisfies the CC condition;
  • t_s is a threshold that increases gradually in steps of 0.002;
  • t_s is raised from the minimum gray value of I_candidate towards its maximum according to the minimum gray interval until the count returned by CC satisfies the condition of the above formula; I_candidate is then binarized with the threshold t_k to extract the binary map of microangioma lesion candidate regions;
  • A3, data annotation: label the results of the I_candidate segmentation extraction as lesion or non-lesion to generate a training set;
  • A4, model training: input the training set into the CNN model for training to obtain the microangioma lesion recognition model.
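The A2 morphological candidate extraction and the mixed threshold search above can be sketched as below. This is a minimal SciPy illustration: the linear structuring elements, the 15° angular step, K=120 and the 0.002 threshold step follow the text, while the exact CC condition (given only as an image in the original) is replaced by a labeled stand-in rule.

```python
import numpy as np
from scipy.ndimage import grey_closing, label

def linear_footprint(length, angle_deg):
    """Binary footprint approximating a line of the given length and angle."""
    fp = np.zeros((length, length), dtype=bool)
    c = length // 2
    rad = np.deg2rad(angle_deg)
    for t in range(-c, c + 1):
        fp[int(round(c + t * np.sin(rad))), int(round(c + t * np.cos(rad)))] = True
    return fp

def candidate_map(I_w, length=11, step_deg=15):
    # Grey-scale closing with line elements at all orientations; elongated
    # vessels survive at some orientation, so the per-pixel minimum response
    # suppresses them while small round dark spots are filled in.
    responses = [grey_closing(I_w, footprint=linear_footprint(length, a))
                 for a in range(0, 180, step_deg)]
    I_closed = np.min(responses, axis=0)
    return I_closed - I_w                         # I_candidate = I_closed - I'_W

def mixed_threshold(I_candidate, K=120, t_step=0.002):
    # The exact CC condition is given only as an image in the patent; as a
    # stand-in, pick the threshold whose candidate-region count is largest
    # without exceeding the cap K.
    best_t, best_n = float(I_candidate.min()), -1
    t = float(I_candidate.min())
    while t < float(I_candidate.max()):
        _, n = label(I_candidate > t)             # CC: count candidate regions
        if best_n < n <= K:
            best_t, best_n = t, n
        t += t_step                               # raise t_s in 0.002 steps
    return I_candidate > best_t, best_t
```

The returned binary map corresponds to the candidate regions that step A3 then labels as lesion or non-lesion.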
  • The hemorrhage lesion recognition model and the exudation lesion recognition model are both trained on the FCN model through the following steps:
  • B1, label lesion regions and non-lesion regions in the fundus images by image processing to generate a training set;
  • B2, build an FCN model with the U-net network structure; at each iteration randomly take part of the labeled data from the training set for training to obtain the trained lesion recognition model; the cost function used in training is DICE.
  • The DICE cost function is given as an image in the original; in its standard form the Dice coefficient is 2|X∩Y| / (|X| + |Y|), where X is the predicted segmentation and Y is the ground-truth labeling.
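Assuming the patent's DICE cost matches the standard Dice formulation for binary segmentation (the exact formula is shown only as an image in the original), it can be sketched as:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """2|X∩Y| / (|X|+|Y|): 1.0 for perfect overlap, 0.0 for none."""
    pred = pred.astype(np.float64).ravel()
    target = target.astype(np.float64).ravel()
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(pred, target):
    # Cost minimized during FCN training: 1 - Dice.
    return 1.0 - dice_coefficient(pred, target)
```

Dice-based costs are a common choice for lesion segmentation because they are insensitive to the large class imbalance between lesion and background pixels.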
  • The labeling of the fundus images also generates a test set, which is used to test the trained model and evaluate its recognition ability.
  • The labeling of the fundus images also generates a verification set, which is used for correction during model training; through this correction the network parameters can be adjusted to prevent the network from over-fitting. The network structure can be determined through the verification set and the complexity of the model controlled; with different verification sets, the results obtained after feeding in the test set differ, so the optimal model meeting the requirements can be selected accordingly.
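The role of the verification set described above can be illustrated with a minimal early-stopping sketch; the stopping rule and the `patience` value are illustrative and not taken from the patent.

```python
class EarlyStopping:
    """Stop training when the validation loss stops improving."""

    def __init__(self, patience=3):
        self.patience = patience       # epochs to wait without improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Feed one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a training loop, `step` would be called once per epoch with the loss measured on the verification set; halting when it returns True is one standard way such a set prevents over-fitting.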
  • Using the FCN model for hemorrhage and exudation recognition also makes it possible to adjust, according to the distribution of the actual data, the proportions of lesion samples, over-detected samples and missed samples among the training samples, improving the accuracy and generalization ability of the model without redesigning the algorithm, reducing the effort of algorithm development and improving development efficiency.
  • Since the area of hemorrhagic lesions is generally large, it is suitable to segment the bleeding regions in the image with the trained hemorrhage lesion recognition model.
  • Through the image processing algorithm, the corresponding segmentation labels of the lesion and non-lesion regions can be obtained, and the hemorrhage lesion recognition model is then trained.
  • For training, a total of 1000 training samples with bleeding were labeled, and 400 training samples without bleeding were added for training.
  • the model has a specificity of 89% for retinal hemorrhage, 89% for DR2 data, and 100% for DR3 data.
  • The recognition effect for bleeding is shown in FIG. 4.
  • Exudation lesion recognition model: since exudation differs significantly in morphology and color from other normal fundus structures, it is less difficult to identify than bleeding, so good recognition results can be obtained with the exudation lesion recognition model.
  • Through the image processing algorithm, the corresponding segmentation labels of the lesion and non-lesion regions can be obtained, and the exudation lesion recognition model is then trained.
  • A total of 800 training samples with exudation were labeled, and 300 training samples without exudation plus 100 samples with exudation-like structures (nerve fiber layer, drusen) were added for training.
  • the resulting model has a sensitivity of 86% and a specificity of 87% for determining whether the image has exudation.
  • The recognition effect for exudation is shown in FIG. 5.
  • Since the recognition result for each lesion cannot be 100% accurate, staging directly from the per-lesion recognition results according to the clinical staging criteria would lower the specificity of the image-level diagnosis; therefore the recognition results of the three lesions are used as features to train a support vector machine classifier that determines the final diagnosis for the image.
  • The sensitivity for DR3 recognition is over 99%, the sensitivity for DR2 recognition is 85%, the sensitivity for DR1 recognition is 80%, and the specificity is 80%.
  • The support vector machine classifier is obtained by combining the lesion feature information identified by the microangioma lesion recognition model, the hemorrhage lesion recognition model and the exudation lesion recognition model, and inputting that lesion feature information into the SVM classifier for training.
  • The lesion feature information includes, but is not limited to, the number, area, shape, gray level, roundness and aspect ratio of lesion regions.
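Assembling the lesion feature information into an input vector for the support vector machine classifier might look like the following sketch; the exact feature layout (five statistics per lesion type) is an assumption for illustration, not the patent's specification.

```python
import numpy as np
from scipy.ndimage import label, find_objects

def region_features(binary_mask, gray):
    """Summarize one lesion type's regions into a fixed-length vector:
    [count, mean area, mean gray, mean aspect ratio, max area]."""
    labeled, n = label(binary_mask)
    if n == 0:
        return np.zeros(5)
    areas, grays, aspects = [], [], []
    for lab, sl in enumerate(find_objects(labeled), start=1):
        region = labeled[sl] == lab
        areas.append(region.sum())               # region area in pixels
        grays.append(gray[sl][region].mean())    # mean gray level
        h, w = region.shape
        aspects.append(w / h)                    # bounding-box aspect ratio
    return np.array([n, np.mean(areas), np.mean(grays),
                     np.mean(aspects), np.max(areas)], dtype=np.float64)

def fused_feature_vector(masks, gray):
    # Concatenate the features of the microaneurysm, hemorrhage and
    # exudation recognition results into one classifier input vector.
    return np.concatenate([region_features(m, gray) for m in masks])
```

The resulting fixed-length vector could then be fed to any SVM implementation (e.g. scikit-learn's `SVC`) for the classification training described above.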
  • A system for classifying diabetic retinal images based on the above method, comprising:
  • the microangioma recognition module is configured to identify the image to be examined using the microangioma lesion recognition model, label the microangioma lesion in the image to be examined, and obtain corresponding lesion characteristic parameters;
  • the bleeding identification module is configured to identify the image to be inspected by using the hemorrhagic lesion recognition model, and segment the identified hemorrhagic lesion region and obtain corresponding lesion characteristic parameters;
  • the exudation recognition module is configured to identify the image to be inspected by using the exudation lesion recognition model, and segment the identified exuded lesion region and obtain corresponding lesion characteristic parameters;
  • a classification module configured to classify feature parameters of each lesion region obtained by identifying the image to be inspected to obtain a classification result of the lesion level of the image to be inspected.
  • The embodiment of the invention is scalable: it currently includes recognition models for three typical diabetic fundus lesions; as the disease presentation and detection needs change, recognition models for the corresponding lesions can be trained with the same deep learning techniques and the corresponding recognition modules added.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Emergency Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biodiversity & Conservation Biology (AREA)

Abstract

A method and system in the field of artificial intelligence for classifying diabetic retinal images based on deep learning, comprising: acquiring a fundus image; feeding the same fundus image into a microangioma lesion recognition model, a hemorrhage lesion recognition model and an exudation lesion recognition model for recognition; extracting lesion feature information from the recognition results, then classifying the extracted lesion feature information with a trained SVM classifier to obtain the classification result. The microangioma lesion recognition model is obtained by extracting microangioma lesion candidate regions in fundus images and then inputting them into a CNN model for training; the hemorrhage lesion recognition model and the exudation lesion recognition model are obtained by labeling hemorrhage lesion regions and exudation lesion regions in fundus images respectively and then inputting them into an FCN model for training. This reduces the demands on the descriptive capacity of the network model, makes the models easy to train, and enables locating and delineating lesion regions for different lesions, facilitating clinical screening by doctors.

Description

Deep learning-based diabetic retinal image classification method and system
Technical Field
The invention relates to the field of artificial intelligence, and is a deep learning-based method and system for classifying diabetic retinal images.
Background Art
The number of diabetic patients in China is large and rising year by year. Diabetic retinopathy is one of the serious complications of diabetes and the leading cause of blindness among people aged 20-65; it not only inflicts great harm and burden on society and patients' families, but also greatly reduces the quality of life of diabetic patients.
Blindness caused by diabetic retinopathy is preventable, so early detection and early intervention are the most effective means of preventing diabetes-induced blindness. In the early stage of diabetic retinopathy, however, patients feel essentially no discomfort; without screening, the disease is easily overlooked and treatment delayed, causing irreversible damage to vision.
Deep learning is now widely applied to medical image processing and can greatly improve the efficiency of clinical screening by doctors. However, currently mature deep learning models all adopt supervised learning, and large quantities of high-quality annotated medical imaging data are hard to obtain, so the medical images used in deep learning training generally lag behind natural images. In diabetic retinopathy in particular, patients' fundus images are complex and multiple coexisting lesions are common, making it difficult to improve detection efficiency under the constraint of limited material. How to obtain a model with strong generalization ability from a relatively small training set is therefore an urgent problem for applying deep learning in medicine.
Summary of the Invention
Addressing the above deficiencies of the prior art, the invention proposes a deep learning-based method and system for classifying diabetic retinal images, which reduces the demands on the descriptive capacity of the network model, makes the model easy to train, and can locate and delineate lesion regions for different lesions, facilitating clinical screening by doctors.
The invention is realized by the following technical solutions:
The invention relates to a deep learning-based method for classifying diabetic retinal images, comprising:
acquiring a fundus image to be identified;
feeding the same fundus image to be identified into a microangioma lesion recognition model, a hemorrhage lesion recognition model and an exudation lesion recognition model respectively for recognition; extracting lesion feature information from the recognition results, then classifying the extracted lesion feature information with a trained support vector machine classifier to obtain the lesion-grade classification result corresponding to the fundus image;
the microangioma lesion recognition model is obtained by extracting microangioma lesion candidate regions in fundus images, labeling microangioma lesion regions and non-microangioma lesion regions, and then inputting them into a CNN model for training;
the hemorrhage lesion recognition model is obtained by labeling hemorrhage lesion regions and non-hemorrhage lesion regions in fundus images and then inputting them into an FCN model for training;
the exudation lesion recognition model is obtained by labeling exudation lesion regions and non-exudation lesion regions in fundus images and then inputting them into an FCN model for training.
The microangioma lesion recognition model is trained on a CNN model through the following steps:
A1, image preprocessing: extract the green channel image, apply the r-polynomial transform to correct the gray levels of the green channel image, then denoise with Gaussian filtering to obtain the corrected image I′_W;
A2, extraction of the microangioma lesion candidate region I_candidate: randomly select a pixel on the corrected image I′_W; with that pixel as reference, generate linear structuring elements of different scales at angular step α; perform morphological processing on I′_W with the generated elements to obtain the responses of the linear structuring elements at different scales; retain the minimum response I_closed for each pixel; obtain I_candidate = I_closed − I′_W; and apply mixed threshold segmentation extraction to I_candidate;
A3, data annotation: label the results of the I_candidate segmentation extraction as lesion or non-lesion to generate a training set;
A4, model training: input the training set into the CNN model for training to obtain the microangioma lesion recognition model.
所述出血病变识别模型和渗出病变识别模型均基于FCN模型训练得到,包括以下步骤:
B1,通过图像处理对眼底图像进行病变区域和非病变区域标注,生成训练集;
B2,使用U-net网络结构构建FCN模型,每次随机取训练集中部分已标注数据进行训练,得到训练后的病变识别模型;训练采用的代价函数是DICE。
The annotation of fundus images also generates a test set, which is used to test the trained models and evaluate their recognition ability.

The annotation of fundus images also generates a validation set, which is used for correction during model training to prevent the network from overfitting.

The image processing comprises the following steps:

C1, extract the fundus region from the image;

C2, enhance the extracted fundus region using median filtering, and normalize the gray levels of the enhanced result;

C3, apply threshold segmentation to the normalized result, then screen out lesion candidate regions using area features.

The support vector machine classifier is obtained by classification training on the lesion feature information corresponding to the training set.

The lesion feature information includes, but is not limited to, the number, area, shape, gray level, circularity, and aspect ratio of the lesion regions.
A system for classifying diabetic retinal images based on the above method comprises:

a microaneurysm recognition module, configured to recognize an image under examination using the microaneurysm lesion recognition model, mark the microaneurysm lesion sites in the image, and obtain the corresponding lesion feature parameters;

a hemorrhage recognition module, configured to recognize the image under examination using the hemorrhage lesion recognition model, segment the recognized hemorrhage lesion regions, and obtain the corresponding lesion feature parameters;

an exudate recognition module, configured to recognize the image under examination using the exudate lesion recognition model, segment the recognized exudate lesion regions, and obtain the corresponding lesion feature parameters;

a classification module, configured to classify the feature parameters of the lesion regions recognized in the image under examination to obtain a lesion grade classification result for the image.
Technical effects

Based on deep learning, the present invention recognizes microaneurysm, hemorrhage, and exudate lesions separately and automatically marks the location and size of lesion regions; compared with traditional methods that combine hand-crafted features with image processing, it reduces the development difficulty of a diabetic retinopathy recognition system. Because a different neural network model is used for each lesion type, each saved model has higher precision and stronger applicability for recognizing its specific lesion. Classifying on multiple features of the three lesion types (microaneurysms, hemorrhages, and exudates) yields higher classification accuracy and more effectively assists physicians in clinical screening.
Brief description of the drawings

Fig. 1 is a flowchart of the method in Embodiment 1;

Fig. 2 shows the microaneurysm candidate region extraction results in Embodiment 1;

Fig. 3 shows the marked microaneurysm lesion regions in Embodiment 1;

Fig. 4 shows the marked hemorrhage lesion sites in Embodiment 1;

Fig. 5 shows the marked exudate lesion sites in Embodiment 1.
Detailed description of the embodiments

The present invention is described in detail below with reference to the drawings and specific embodiments.

Embodiment 1
As shown in Fig. 1, this embodiment relates to a deep-learning-based method for classifying diabetic retinal images, comprising:

acquiring a fundus image to be recognized;

feeding the same fundus image to be recognized into a microaneurysm lesion recognition model, a hemorrhage lesion recognition model, and an exudate lesion recognition model, respectively, for recognition; extracting lesion feature information from the recognition results, and classifying the extracted lesion feature information with a trained support vector machine classifier to obtain a lesion grade classification result for the fundus image;

wherein the microaneurysm lesion recognition model is obtained by extracting microaneurysm lesion candidate regions from fundus images, annotating microaneurysm lesion regions and non-microaneurysm regions, and training a CNN model on the annotated data;

the hemorrhage lesion recognition model is obtained by annotating hemorrhage lesion regions and non-hemorrhage regions in fundus images and training an FCN model on the annotated data;

the exudate lesion recognition model is obtained by annotating exudate lesion regions and non-exudate regions in fundus images and training an FCN model on the annotated data.
The microaneurysm lesion recognition model is trained on a CNN model through the following steps:

A1, image preprocessing: extract the green channel image, apply the r-polynomial transform to the green channel image for gray-level correction, and then denoise with Gaussian filtering to obtain the corrected image I′_W; the r-polynomial transform is as follows:
I_W(i,j) = (μ_max - μ_min)/2 · ((G(i,j) - μ_min)/(μ_W(i,j) - μ_min))^r + μ_min, if G(i,j) ≤ μ_W(i,j);
I_W(i,j) = μ_max - (μ_max - μ_min)/2 · ((μ_max - G(i,j))/(μ_max - μ_W(i,j)))^r, if G(i,j) > μ_W(i,j)
where r is the power of the polynomial, taken as 2; μ_min is the minimum gray level, taken as 0; μ_max is the maximum gray level, taken as 1; G is the extracted green channel image; μ_W(i,j) is the mean gray level of the green channel image in a neighborhood of radius W centered at (i,j); and I_W is the gray-level equalized image obtained by the r-polynomial transform;
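Step A1 can be sketched in NumPy as follows. The piecewise polynomial form follows the Walter-Klein-type gray-level transform implied by the symbol definitions above; the neighborhood radius `w` and Gaussian `sigma` values are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def local_mean(img, w):
    """Mean gray level in the (2w+1)x(2w+1) neighbourhood, via an integral image."""
    k = 2 * w + 1
    pad = np.pad(img, w, mode="edge")
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    H, W = img.shape
    return (ii[k:k + H, k:k + W] - ii[:H, k:k + W]
            - ii[k:k + H, :W] + ii[:H, :W]) / (k * k)

def r_polynomial(G, mu_w, r=2.0, mu_min=0.0, mu_max=1.0):
    """Piecewise polynomial gray-level correction around the local mean mu_w."""
    out = np.empty_like(G)
    lo = G <= mu_w
    eps = 1e-12  # guards the division when mu_w touches mu_min or mu_max
    out[lo] = mu_min + 0.5 * (mu_max - mu_min) * (
        (G[lo] - mu_min) / (mu_w[lo] - mu_min + eps)) ** r
    out[~lo] = mu_max - 0.5 * (mu_max - mu_min) * (
        (mu_max - G[~lo]) / (mu_max - mu_w[~lo] + eps)) ** r
    return out

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian denoising."""
    rad = max(1, int(3 * sigma))
    x = np.arange(-rad, rad + 1)
    kern = np.exp(-x ** 2 / (2 * sigma ** 2))
    kern /= kern.sum()
    out = np.apply_along_axis(np.convolve, 0, img, kern, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, kern, mode="same")

def preprocess(rgb, w=12, sigma=1.0):
    """A1: green channel -> r-polynomial correction -> Gaussian denoising."""
    G = rgb[..., 1].astype(float) / 255.0
    return gaussian_smooth(r_polynomial(G, local_mean(G, w)), sigma)
```

With r = 2, μ_min = 0 and μ_max = 1, a pixel equal to its local mean maps to 0.5, so contrast is stretched symmetrically around the local background level.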
A2, extraction of microaneurysm lesion candidate regions I_candidate: select a pixel in the corrected image I′_W at random and, taking that pixel as the reference, generate linear structuring elements at different scales with an angular step of 10° to 25°, preferably 15°; apply morphological processing to the corrected image I′_W with the generated linear structuring elements to obtain the responses of the linear structuring elements at each scale; keep the minimum response I_closed at each pixel; compute I_candidate = I_closed - I′_W; and extract candidates from I_candidate by hybrid threshold segmentation, with the extraction results shown in Fig. 2;

The condition for the hybrid threshold segmentation is:
t_k = min{ t_s : t_l ≤ t_s ≤ t_u and CC(I_candidate ≥ t_s) ≤ K }
where K is a constant representing the maximum number of microaneurysm lesion candidate regions in the morphological processing, preferably taken as 120, and CC is the function counting the number of candidate lesion regions;

t_l is the set minimum threshold, t_u is the set maximum threshold, t_k is the threshold satisfying the CC condition, and t_s is a threshold that grows gradually in steps of 0.002;

t_s is increased from the minimum value of I_candidate to its maximum gray level in minimum gray-level increments until the count returned by CC satisfies the above condition; I_candidate is then binarized with the threshold t_k to obtain the binary map of microaneurysm candidate regions;
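The candidate extraction of step A2 and the threshold sweep can be sketched as follows. The structuring-element length and the bounds `t_l` and `t_u` are illustrative assumptions; the patent fixes only the 15° angular step, the 0.002 threshold increment, and K = 120.

```python
import numpy as np

def line_offsets(length, angle_deg):
    """Pixel offsets of a centred line structuring element at one angle."""
    t = np.linspace(-(length - 1) / 2, (length - 1) / 2, length)
    a = np.deg2rad(angle_deg)
    return sorted({(int(round(d * np.sin(a))), int(round(d * np.cos(a)))) for d in t})

def gray_dilate(img, offs):
    m = max(max(abs(dy), abs(dx)) for dy, dx in offs)
    pad = np.pad(img, m, mode="edge")
    H, W = img.shape
    return np.stack([pad[m + dy:m + dy + H, m + dx:m + dx + W]
                     for dy, dx in offs]).max(0)

def gray_close(img, offs):
    """Grayscale closing: dilation followed by erosion (erosion via duality)."""
    dil = gray_dilate(img, offs)
    return -gray_dilate(-dil, [(-dy, -dx) for dy, dx in offs])

def candidate_map(I_corr, length=11, step_deg=15):
    """A2: minimum over rotated-line closings, minus the corrected image."""
    closed = [gray_close(I_corr, line_offsets(length, a))
              for a in range(0, 180, step_deg)]
    return np.minimum.reduce(closed) - I_corr      # I_candidate

def count_regions(mask):
    """CC: number of 4-connected components (flood fill, no SciPy needed)."""
    seen = np.zeros(mask.shape, bool)
    H, W = mask.shape
    n = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                n += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < H and 0 <= xx < W and mask[yy, xx] and not seen[yy, xx]:
                            seen[yy, xx] = True
                            stack.append((yy, xx))
    return n

def hybrid_threshold(I_cand, K=120, t_l=0.01, t_u=0.5, step=0.002):
    """Raise t_s in 0.002 steps until at most K regions remain; binarize at t_k."""
    t = max(float(I_cand.min()), t_l)
    while t <= min(float(I_cand.max()), t_u):
        mask = I_cand >= t
        if count_regions(mask) <= K:
            return mask
        t += step
    return I_cand >= t_u
```

Because microaneurysms are small dark spots on the green channel, the closing fills them along every line orientation, so the minimum closing minus the original responds strongly only at such spots.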
A3, data annotation: annotate the segmentation results of the microaneurysm candidate regions I_candidate as lesion or non-lesion to generate a training set;

A4, model training: feed the training set into the CNN model for training to obtain the microaneurysm lesion recognition model.

Here we used 400 fundus images containing microaneurysms for training; the trained model reaches 90% sensitivity and 90% specificity in classifying whether a segmented site is a microaneurysm. The resulting microaneurysm lesion regions are shown in Fig. 3.
The hemorrhage lesion recognition model and the exudate lesion recognition model are both trained on FCN models through the following steps:

B1, annotate lesion regions and non-lesion regions in fundus images by image processing to generate a training set;

B2, build the FCN model with a U-net architecture and train it on randomly sampled batches of annotated data from the training set to obtain the trained lesion recognition model; the cost function used in training is DICE.
The DICE cost function is:

DICE(X, Y) = 2|X ∩ Y| / (|X| + |Y|)

where X is the label map and Y is the result map.
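As a minimal sketch, the DICE cost can be written as a loss of the form 1 - DICE(X, Y); the smoothing constant `eps` is an assumption added for numerical stability when both maps are empty.

```python
import numpy as np

def dice_loss(pred, label, eps=1e-7):
    """1 - 2|X ∩ Y| / (|X| + |Y|); pred may be a soft probability map."""
    inter = float((pred * label).sum())
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum()) + float(label.sum()) + eps)
```

The loss is 0 for a perfect overlap and approaches 1 for disjoint maps, which makes it robust to the heavy class imbalance between small lesion regions and the background.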
The annotation of fundus images also generates a test set, which is used to test the trained models and evaluate their recognition ability.

The annotation of fundus images also generates a validation set, which is used for correction during model training; through this correction the network parameters can be adjusted to prevent overfitting. The validation set can also be used to determine the network structure and control model complexity. Because the results obtained on the test set vary with the validation set, the optimal model meeting our needs can be selected accordingly.
The specific steps of the image processing algorithm are:

C1, extract the fundus region from the image;

C2, enhance the extracted fundus region using median filtering, and normalize the gray levels of the enhanced result;

C3, apply threshold segmentation to the normalized result, then screen the regions using area features to obtain the segmentation result.
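Steps C1 to C3 can be sketched as below. The field-of-view threshold, median kernel size, segmentation threshold, and area bounds are all illustrative assumptions.

```python
import numpy as np

def median_filter(img, k):
    """k x k median filter via shifted-view stacking (fine for small images)."""
    r = k // 2
    pad = np.pad(img, r, mode="edge")
    H, W = img.shape
    stack = np.stack([pad[dy:dy + H, dx:dx + W]
                      for dy in range(k) for dx in range(k)])
    return np.median(stack, axis=0)

def area_filter(mask, a_min, a_max):
    """C3: keep only 4-connected components whose area lies in [a_min, a_max]."""
    out = np.zeros_like(mask)
    seen = np.zeros(mask.shape, bool)
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                stack, pixels = [(i, j)], []
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < H and 0 <= xx < W and mask[yy, xx] and not seen[yy, xx]:
                            seen[yy, xx] = True
                            stack.append((yy, xx))
                if a_min <= len(pixels) <= a_max:
                    for y, x in pixels:
                        out[y, x] = True
    return out

def annotate_candidates(gray, fov_thresh=0.05, med_k=15, seg_thresh=0.5,
                        a_min=5, a_max=500):
    """C1-C3 sketch on a [0, 1] grayscale fundus image."""
    # C1: the fundus field of view is brighter than the black border
    fov = gray > fov_thresh
    # C2: enhancement = subtract a median-filtered background estimate,
    # then normalise gray levels to [0, 1]
    enh = gray - median_filter(gray, med_k)
    enh = (enh - enh.min()) / (enh.max() - enh.min() + 1e-12)
    # C3: threshold, then screen components by area
    return area_filter((enh > seg_thresh) & fov, a_min, a_max)
```

The resulting mask provides the lesion/non-lesion labels used to build the FCN training set without manual pixel-level annotation.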
Using FCN models for hemorrhage and exudate recognition also makes it possible to adjust the proportions of lesion samples, false-positive samples, and missed samples in the training set according to the actual data distribution, improving model precision and generalization without redesigning the algorithm; this reduces algorithm development effort and increases development efficiency.

Because hemorrhage lesion regions are generally large, the trained hemorrhage lesion recognition model is well suited to segmenting hemorrhage regions in the image. Using the image processing algorithm, we obtain the corresponding lesion and non-lesion segmentation labels and train the hemorrhage lesion recognition model; a total of 1000 training samples with hemorrhage were annotated. To suppress false positives, 400 training samples without hemorrhage were added. In testing, the model reaches 89% specificity for fundus hemorrhage recognition, 89% sensitivity for hemorrhage recognition on DR2 data, and 100% sensitivity on DR3 data. The hemorrhage recognition results are shown in Fig. 4.

Because exudates differ markedly from other normal fundus structures in shape and color, they are easier to recognize than hemorrhages, and the exudate lesion recognition model alone yields good results. Using the image processing algorithm, we obtain the corresponding lesion and non-lesion segmentation labels and train the exudate lesion recognition model; a total of 800 training samples with exudates were annotated. To suppress false positives, 300 training samples without exudates and 100 samples of lesions similar to exudates (nerve fiber layer, drusen) were added. The final model reaches 86% sensitivity and 87% specificity in judging whether an image contains exudates. The exudate recognition results are shown in Fig. 5.

Because no lesion type can be recognized with perfect accuracy, staging images directly from the recognition results of each lesion type according to clinical staging criteria would yield low diagnostic specificity. We therefore use the recognition results of the three lesion types as features to train a support vector machine classifier that produces the final diagnosis of the image, achieving over 99% sensitivity for DR3, 85% sensitivity for DR2, 80% sensitivity for DR1, and 80% specificity.

With the above method, we not only achieve high accuracy in recognizing whether an image contains lesions, but also mark the locations of lesion regions in the image.
The support vector machine classifier is obtained by combining the lesion feature information extracted from the recognition results of the microaneurysm, hemorrhage, and exudate lesion recognition models and feeding that lesion feature information into the SVM classifier for training.

The lesion feature information includes, but is not limited to, the number, area, shape, gray level, circularity, and aspect ratio of the lesion regions.
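As a stand-in for a library SVM, a minimal linear SVM trained by Pegasos-style sub-gradient descent is sketched below; the feature rows would hold the per-image lesion statistics listed above. The regularization constant and epoch count are assumptions, and the multi-grade decision (DR0 to DR3) would combine several such binary classifiers one-vs-rest.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=500, seed=0):
    """Pegasos-style stochastic sub-gradient descent for a linear SVM.

    X: (n, d) lesion-feature rows; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            if y[i] * (X[i] @ w + b) < 1:  # inside margin: hinge sub-gradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                          # correct with margin: shrink only
                w = (1 - eta * lam) * w
    return w, b

def svm_predict(w, b, X):
    return np.where(X @ w + b >= 0.0, 1, -1)
```

In practice a kernel SVM from a standard library could replace this sketch; the point is that the classifier operates on a handful of interpretable lesion features rather than raw pixels.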
A system for classifying diabetic retinal images based on the above method comprises:

a microaneurysm recognition module, configured to recognize an image under examination using the microaneurysm lesion recognition model, mark the microaneurysm lesion sites in the image, and obtain the corresponding lesion feature parameters;

a hemorrhage recognition module, configured to recognize the image under examination using the hemorrhage lesion recognition model, segment the recognized hemorrhage lesion regions, and obtain the corresponding lesion feature parameters;

an exudate recognition module, configured to recognize the image under examination using the exudate lesion recognition model, segment the recognized exudate lesion regions, and obtain the corresponding lesion feature parameters;

a classification module, configured to classify the feature parameters of the lesion regions recognized in the image under examination to obtain a lesion grade classification result for the image.

The embodiments of the present invention are extensible: they currently include recognition models for three typical diabetic fundus lesions, and as disease pathology evolves and detection needs grow, recognition models for further lesions can be trained with deep learning techniques and the corresponding recognition modules added.

It should be emphasized that the above are merely preferred embodiments of the present invention and do not limit the invention in any form; any simple modification, equivalent variation, or refinement of the above embodiments made in accordance with the technical substance of the present invention still falls within the scope of the technical solutions of the invention.

Claims (10)

  1. A deep-learning-based method for classifying diabetic retinal images, characterized by comprising:
    acquiring a fundus image to be recognized;
    feeding the same fundus image to be recognized into a microaneurysm lesion recognition model, a hemorrhage lesion recognition model, and an exudate lesion recognition model, respectively, for recognition; extracting lesion feature information from the recognition results, and classifying the extracted lesion feature information with a trained support vector machine classifier to obtain a lesion grade classification result for the fundus image;
    wherein the microaneurysm lesion recognition model is obtained by extracting microaneurysm lesion candidate regions from fundus images, annotating microaneurysm lesion regions and non-microaneurysm regions, and training a CNN model on the annotated data;
    the hemorrhage lesion recognition model is obtained by annotating hemorrhage lesion regions and non-hemorrhage regions in fundus images and training an FCN model on the annotated data;
    the exudate lesion recognition model is obtained by annotating exudate lesion regions and non-exudate regions in fundus images and training an FCN model on the annotated data.
  2. The deep-learning-based method for classifying diabetic retinal images according to claim 1, characterized in that the microaneurysm lesion recognition model is trained on a CNN model through the following steps:
    A1, image preprocessing: extracting the green channel image, applying the r-polynomial transform to the green channel image for gray-level correction, and denoising with Gaussian filtering to obtain the corrected image I′_W;
    A2, extraction of microaneurysm lesion candidate regions I_candidate: selecting a pixel in the corrected image I′_W at random and, taking that pixel as the reference, generating linear structuring elements at different scales with an angular step of α; applying morphological processing to the corrected image I′_W with the generated linear structuring elements to obtain the responses of the linear structuring elements at each scale; keeping the minimum response I_closed at each pixel; computing I_candidate = I_closed - I′_W; and extracting candidates from I_candidate by hybrid threshold segmentation;
    A3, data annotation: annotating the segmentation results of the microaneurysm candidate regions I_candidate as lesion or non-lesion to generate a training set;
    A4, model training: feeding the training set into the CNN model for training to obtain the microaneurysm lesion recognition model.
  3. The deep-learning-based method for classifying diabetic retinal images according to claim 2, characterized in that the r-polynomial transform is:
    I_W(i,j) = (μ_max - μ_min)/2 · ((G(i,j) - μ_min)/(μ_W(i,j) - μ_min))^r + μ_min, if G(i,j) ≤ μ_W(i,j);
    I_W(i,j) = μ_max - (μ_max - μ_min)/2 · ((μ_max - G(i,j))/(μ_max - μ_W(i,j)))^r, if G(i,j) > μ_W(i,j)
    where r is the power of the polynomial, taken as 2; μ_min is the minimum gray level, taken as 0; μ_max is the maximum gray level, taken as 1; G is the extracted green channel image; μ_W(i,j) is the mean gray level of the green channel image in a neighborhood of radius W centered at (i,j); and I_W is the gray-level equalized image obtained by the r-polynomial transform.
  4. The deep-learning-based method for classifying diabetic retinal images according to claim 2, characterized in that the condition for the hybrid threshold segmentation is:
    t_k = min{ t_s : t_l ≤ t_s ≤ t_u and CC(I_candidate ≥ t_s) ≤ K }
    where K is a constant representing the maximum number of microaneurysm lesion candidate regions in the morphological processing, and CC is the function counting the number of candidate lesion regions;
    t_l is the set minimum threshold, t_u is the maximum threshold, t_k is the threshold satisfying the CC condition, and t_s is a threshold that grows gradually in steps of 0.001 to 0.004;
    t_s is increased from the minimum value of I_candidate to its maximum gray level in minimum gray-level increments until the count returned by CC satisfies the above condition; I_candidate is then binarized with the threshold t_k to obtain the binary map of microaneurysm lesion candidate regions.
  5. The deep-learning-based method for classifying diabetic retinal images according to claim 1, characterized in that the hemorrhage lesion recognition model and the exudate lesion recognition model are both trained on FCN models through the following steps:
    B1, annotating lesion regions and non-lesion regions in fundus images by image processing to generate a training set;
    B2, building the FCN model with a U-net architecture and training it on randomly sampled batches of annotated data from the training set to obtain the trained lesion recognition model, the cost function used in training being DICE.
  6. The deep-learning-based method for classifying diabetic retinal images according to claim 1, 2 or 5, characterized in that the annotation of fundus images also generates a test set, which is used to test the trained models and evaluate their recognition ability.
  7. The deep-learning-based method for classifying diabetic retinal images according to claim 1, 2, 5 or 6, characterized in that the annotation of fundus images also generates a validation set, which is used for correction during model training to prevent the network from overfitting.
  8. The deep-learning-based method for classifying diabetic retinal images according to claim 1, characterized in that the support vector machine classifier is obtained by classification training on the lesion feature information corresponding to the training set.
  9. The deep-learning-based method for classifying diabetic retinal images according to claim 8, characterized in that the lesion feature information comprises the number, area, shape, gray level, circularity, and aspect ratio of the lesion regions.
  10. A system for classifying diabetic retinal images based on the method according to any one of the preceding claims, comprising:
    a microaneurysm recognition module, configured to recognize an image under examination using the microaneurysm lesion recognition model, mark the microaneurysm lesion sites in the image, and obtain the corresponding lesion feature parameters;
    a hemorrhage recognition module, configured to recognize the image under examination using the hemorrhage lesion recognition model, segment the recognized hemorrhage lesion regions, and obtain the corresponding lesion feature parameters;
    an exudate recognition module, configured to recognize the image under examination using the exudate lesion recognition model, segment the recognized exudate lesion regions, and obtain the corresponding lesion feature parameters;
    a classification module, configured to classify the feature parameters of the lesion regions recognized in the image under examination to obtain a lesion grade classification result for the image.
PCT/CN2018/098390 2018-04-13 2018-08-02 Deep-learning-based method and system for classifying diabetic retinal images WO2019196268A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18914825.7A EP3779786A4 (en) 2018-04-13 2018-08-02 DIABETIC RETINOPATHY IMAGE CLASSIFICATION METHOD AND SYSTEM BASED ON DEEP LEARNING
US16/835,199 US11132799B2 (en) 2018-04-13 2020-03-30 Method and system for classifying diabetic retina images based on deep learning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810330385.6 2018-04-13
CN201810330385.6A CN108615051B (zh) 2018-04-13 2018-04-13 基于深度学习的糖尿病视网膜图像分类方法及系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/835,199 Continuation US11132799B2 (en) 2018-04-13 2020-03-30 Method and system for classifying diabetic retina images based on deep learning

Publications (1)

Publication Number Publication Date
WO2019196268A1 true WO2019196268A1 (zh) 2019-10-17

Family

ID=63659944

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098390 WO2019196268A1 (zh) 2018-04-13 2018-08-02 基于深度学习的糖尿病视网膜图像分类方法及系统

Country Status (4)

Country Link
US (1) US11132799B2 (zh)
EP (1) EP3779786A4 (zh)
CN (1) CN108615051B (zh)
WO (1) WO2019196268A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853009A (zh) * 2019-11-11 2020-02-28 北京端点医药研究开发有限公司 基于机器学习的视网膜病理图像分析系统
CN112233087A (zh) * 2020-10-14 2021-01-15 武汉楚精灵医疗科技有限公司 一种基于人工智能的眼科超声疾病诊断方法和系统
CN112580580A (zh) * 2020-12-28 2021-03-30 厦门理工学院 一种基于数据增强与模型融合的病理性近视识别方法
CN112734774A (zh) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 一种高精度眼底血管提取方法、装置、介质、设备和系统
CN112819797A (zh) * 2021-02-06 2021-05-18 国药集团基因科技有限公司 一种糖尿病性视网膜病变分析方法、装置、系统、以及存储介质
CN113744212A (zh) * 2021-08-23 2021-12-03 江苏大学 一种基于显微光谱图像采集和图像处理算法的粮食真菌孢子智能识别方法
CN113793308A (zh) * 2021-08-25 2021-12-14 北京科技大学 一种基于神经网络的球团矿质量智能评级方法及装置
CN114898172A (zh) * 2022-04-08 2022-08-12 辽宁师范大学 基于多特征dag网络的糖尿病视网膜病变分类建模方法
CN116721760A (zh) * 2023-06-12 2023-09-08 东北林业大学 融合生物标志物的多任务糖尿病性视网膜病变检测算法

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596895B (zh) * 2018-04-26 2020-07-28 上海鹰瞳医疗科技有限公司 基于机器学习的眼底图像检测方法、装置及系统
CN111199794B (zh) * 2018-11-19 2024-03-01 复旦大学附属眼耳鼻喉科医院 一种适用于高度近视白内障的手术智能决策系统及其建立方法
CN109602391A (zh) * 2019-01-04 2019-04-12 平安科技(深圳)有限公司 眼底出血点的自动检测方法、装置及计算机可读存储介质
CN109886946B (zh) * 2019-02-18 2023-05-23 广州视源电子科技股份有限公司 基于深度学习的早期老年性黄斑病变弱监督分类方法
CN110009626A (zh) * 2019-04-11 2019-07-12 北京百度网讯科技有限公司 用于生成图像的方法和装置
CN110264443B (zh) * 2019-05-20 2024-04-16 平安科技(深圳)有限公司 基于特征可视化的眼底图像病变标注方法、装置及介质
KR102295426B1 (ko) * 2019-07-05 2021-08-30 순천향대학교 산학협력단 인공지능 기반의 안구질병 검출장치 및 방법
CN110610756A (zh) * 2019-07-26 2019-12-24 赛诺威盛科技(北京)有限公司 基于dicom图像信息实现胶片自动分类打印的方法
CN110414607A (zh) * 2019-07-31 2019-11-05 中山大学 胶囊内窥镜图像的分类方法、装置、设备及介质
CN110490138A (zh) * 2019-08-20 2019-11-22 北京大恒普信医疗技术有限公司 一种数据处理方法及装置、存储介质、电子设备
CN110555845A (zh) * 2019-09-27 2019-12-10 上海鹰瞳医疗科技有限公司 眼底oct图像识别方法及设备
CN110840402B (zh) * 2019-11-19 2021-02-26 山东大学 一种基于机器学习的房颤信号识别方法及系统
CN111179258A (zh) * 2019-12-31 2020-05-19 中山大学中山眼科中心 一种识别视网膜出血图像的人工智能方法及系统
CN113283270A (zh) * 2020-02-20 2021-08-20 京东方科技集团股份有限公司 图像处理方法和装置、筛查系统、计算机可读存储介质
CN111815574B (zh) * 2020-06-18 2022-08-12 南通大学 一种基于粗糙集神经网络的眼底视网膜血管图像分割方法
CN111754486B (zh) * 2020-06-24 2023-08-15 北京百度网讯科技有限公司 图像处理方法、装置、电子设备及存储介质
CN112053321A (zh) * 2020-07-30 2020-12-08 中山大学中山眼科中心 一种识别高度近视视网膜病变的人工智能系统
CN112037187B (zh) * 2020-08-24 2024-03-26 宁波市眼科医院 一种眼底低质量图片的智能优化系统
CN112016626B (zh) * 2020-08-31 2023-12-01 中科泰明(南京)科技有限公司 基于不确定度的糖尿病视网膜病变分类系统
CN111968107B (zh) * 2020-08-31 2024-03-12 合肥奥比斯科技有限公司 基于不确定度的早产儿视网膜病plus病变分类系统
CN112185523B (zh) * 2020-09-30 2023-09-08 南京大学 基于多尺度卷积神经网络的糖尿病视网膜病变分类方法
CN112184697B (zh) * 2020-10-15 2022-10-04 桂林电子科技大学 基于果蝇优化算法的糖尿病视网膜病变分级深度学习方法
CN112241766B (zh) * 2020-10-27 2023-04-18 西安电子科技大学 基于样本生成和迁移学习的肝脏ct图像多病变分类方法
CN112330624A (zh) 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 医学图像处理方法和装置
CN112446860B (zh) * 2020-11-23 2024-04-16 中山大学中山眼科中心 一种基于迁移学习的糖尿病黄斑水肿自动筛查方法
CN112489003A (zh) * 2020-11-25 2021-03-12 哈尔滨理工大学 一种基于深度学习的糖尿病视网膜病变区域定位检测方法
CN112634243B (zh) * 2020-12-28 2022-08-05 吉林大学 一种强干扰因素下基于深度学习的图像分类识别系统
TWI789199B (zh) * 2021-01-13 2023-01-01 高雄醫學大學 眼底鏡影像預測糖尿病性腎病變期程之方法及其系統
CN112869697A (zh) * 2021-01-20 2021-06-01 深圳硅基智能科技有限公司 同时识别糖尿病视网膜病变的分期和病变特征的判断方法
CN112883962B (zh) * 2021-01-29 2023-07-18 北京百度网讯科技有限公司 眼底图像识别方法、装置、设备、存储介质以及程序产品
CN112869704B (zh) * 2021-02-02 2022-06-17 苏州大学 一种基于循环自适应多目标加权网络的糖尿病视网膜病变区域自动分割方法
CN113344842A (zh) * 2021-03-24 2021-09-03 同济大学 一种超声图像的血管标注方法
CN113012148A (zh) * 2021-04-14 2021-06-22 中国人民解放军总医院第一医学中心 一种基于眼底影像的糖尿病肾病-非糖尿病肾病鉴别诊断装置
CN113487621A (zh) * 2021-05-25 2021-10-08 平安科技(深圳)有限公司 医学图像分级方法、装置、电子设备及可读存储介质
CN113627231B (zh) * 2021-06-16 2023-10-31 温州医科大学 一种基于机器视觉的视网膜oct图像中液体区域自动分割方法
CN113537298A (zh) * 2021-06-23 2021-10-22 广东省人民医院 一种视网膜图像分类方法及装置
CN113705569A (zh) * 2021-08-31 2021-11-26 北京理工大学重庆创新中心 一种图像标注方法及系统
CN114305391A (zh) * 2021-11-04 2022-04-12 上海市儿童医院 一种深度学习对尿道下裂阴茎弯曲度的测量方法
CN114494196B (zh) * 2022-01-26 2023-11-17 南通大学 基于遗传模糊树的视网膜糖尿病变深度网络检测方法
CN114359279B (zh) * 2022-03-18 2022-06-03 武汉楚精灵医疗科技有限公司 图像处理方法、装置、计算机设备及存储介质
CN115170503B (zh) * 2022-07-01 2023-12-19 上海市第一人民医院 基于决策规则和深度神经网络的眼底图像视野分类方法及装置
CN116168255B (zh) * 2023-04-10 2023-12-08 武汉大学人民医院(湖北省人民医院) 一种长尾分布鲁棒的视网膜oct图像分类方法
CN116977320B (zh) * 2023-08-14 2024-04-26 中山火炬开发区人民医院 Ct+mri病变区域突出预估系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870838A (zh) * 2014-03-05 2014-06-18 南京航空航天大学 糖尿病视网膜病变的眼底图像特征提取方法
WO2016132115A1 (en) * 2015-02-16 2016-08-25 University Of Surrey Detection of microaneurysms
CN107203778A (zh) * 2017-05-05 2017-09-26 平安科技(深圳)有限公司 视网膜病变程度等级检测系统及方法
CN107330449A (zh) * 2017-06-13 2017-11-07 瑞达昇科技(大连)有限公司 一种糖尿病性视网膜病变体征检测方法及装置
CN107423571A (zh) * 2017-05-04 2017-12-01 深圳硅基仿生科技有限公司 基于眼底图像的糖尿病视网膜病变识别系统
CN107680684A (zh) * 2017-10-12 2018-02-09 百度在线网络技术(北京)有限公司 用于获取信息的方法及装置

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005102303A2 (en) * 2004-04-21 2005-11-03 Advanced Ocular Systems Limited Antiprostaglandins for the treatment of ocular pathologies
JP5778762B2 (ja) * 2010-05-25 2015-09-16 ザ ジェネラル ホスピタル コーポレイション 光コヒーレンストモグラフィー画像のスペクトル解析のための装置及び方法
US8885901B1 (en) * 2013-10-22 2014-11-11 Eyenuk, Inc. Systems and methods for automated enhancement of retinal images
US10722115B2 (en) * 2015-08-20 2020-07-28 Ohio University Devices and methods for classifying diabetic and macular degeneration
CN105513077B (zh) * 2015-12-11 2019-01-04 北京大恒图像视觉有限公司 一种用于糖尿病性视网膜病变筛查的系统
JP2020518915A (ja) * 2017-04-27 2020-06-25 パスハラキス スタブロスPASCHALAKIS, Stavros 自動眼底画像分析用のシステムおよび方法
CN107123124B (zh) * 2017-05-04 2020-05-12 季鑫 视网膜图像分析方法、装置和计算设备
US11132797B2 (en) * 2017-12-28 2021-09-28 Topcon Corporation Automatically identifying regions of interest of an object from horizontal images using a machine learning guided imaging system
WO2019231102A1 (ko) * 2018-05-31 2019-12-05 주식회사 뷰노 피검체의 안저 영상을 분류하는 방법 및 이를 이용한 장치

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870838A (zh) * 2014-03-05 2014-06-18 南京航空航天大学 糖尿病视网膜病变的眼底图像特征提取方法
WO2016132115A1 (en) * 2015-02-16 2016-08-25 University Of Surrey Detection of microaneurysms
CN107423571A (zh) * 2017-05-04 2017-12-01 深圳硅基仿生科技有限公司 基于眼底图像的糖尿病视网膜病变识别系统
CN107203778A (zh) * 2017-05-05 2017-09-26 平安科技(深圳)有限公司 视网膜病变程度等级检测系统及方法
CN107330449A (zh) * 2017-06-13 2017-11-07 瑞达昇科技(大连)有限公司 一种糖尿病性视网膜病变体征检测方法及装置
CN107680684A (zh) * 2017-10-12 2018-02-09 百度在线网络技术(北京)有限公司 用于获取信息的方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3779786A4

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853009B (zh) * 2019-11-11 2023-04-28 北京端点医药研究开发有限公司 基于机器学习的视网膜病理图像分析系统
CN110853009A (zh) * 2019-11-11 2020-02-28 北京端点医药研究开发有限公司 基于机器学习的视网膜病理图像分析系统
CN112233087A (zh) * 2020-10-14 2021-01-15 武汉楚精灵医疗科技有限公司 一种基于人工智能的眼科超声疾病诊断方法和系统
CN112580580A (zh) * 2020-12-28 2021-03-30 厦门理工学院 一种基于数据增强与模型融合的病理性近视识别方法
CN112734774A (zh) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 一种高精度眼底血管提取方法、装置、介质、设备和系统
CN112819797A (zh) * 2021-02-06 2021-05-18 国药集团基因科技有限公司 一种糖尿病性视网膜病变分析方法、装置、系统、以及存储介质
CN112819797B (zh) * 2021-02-06 2023-09-19 国药集团基因科技有限公司 糖尿病性视网膜病变分析方法、装置、系统及存储介质
CN113744212A (zh) * 2021-08-23 2021-12-03 江苏大学 一种基于显微光谱图像采集和图像处理算法的粮食真菌孢子智能识别方法
CN113793308A (zh) * 2021-08-25 2021-12-14 北京科技大学 一种基于神经网络的球团矿质量智能评级方法及装置
CN114898172A (zh) * 2022-04-08 2022-08-12 辽宁师范大学 基于多特征dag网络的糖尿病视网膜病变分类建模方法
CN114898172B (zh) * 2022-04-08 2024-04-02 辽宁师范大学 基于多特征dag网络的糖尿病视网膜病变分类建模方法
CN116721760A (zh) * 2023-06-12 2023-09-08 东北林业大学 融合生物标志物的多任务糖尿病性视网膜病变检测算法
CN116721760B (zh) * 2023-06-12 2024-04-26 东北林业大学 融合生物标志物的多任务糖尿病性视网膜病变检测算法

Also Published As

Publication number Publication date
US20200234445A1 (en) 2020-07-23
US11132799B2 (en) 2021-09-28
EP3779786A4 (en) 2022-01-12
CN108615051A (zh) 2018-10-02
CN108615051B (zh) 2020-09-15
EP3779786A1 (en) 2021-02-17

Similar Documents

Publication Publication Date Title
WO2019196268A1 (zh) 基于深度学习的糖尿病视网膜图像分类方法及系统
US11666210B2 (en) System for recognizing diabetic retinopathy
Kumar et al. Diabetic retinopathy detection by extracting area and number of microaneurysm from colour fundus image
CN109472781B (zh) 一种基于串行结构分割的糖尿病视网膜病变检测系统
Issac et al. An adaptive threshold based image processing technique for improved glaucoma detection and classification
Sopharak et al. Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods
Saleh et al. An automated decision-support system for non-proliferative diabetic retinopathy disease based on MAs and HAs detection
Palavalasa et al. Automatic diabetic retinopathy detection using digital image processing
Punnolil A novel approach for diagnosis and severity grading of diabetic maculopathy
Jaafar et al. Automated detection of red lesions from digital colour fundus photographs
WO2020224282A1 (zh) 一种婴幼儿粪便采样图像分类处理系统及方法
Ni Ni et al. Anterior chamber angle shape analysis and classification of glaucoma in SS-OCT images
Agrawal et al. A survey on automated microaneurysm detection in diabetic retinopathy retinal images
Fang et al. Diabetic retinopathy classification using a novel DAG network based on multi-feature of fundus images
Prentasic et al. Weighted ensemble based automatic detection of exudates in fundus photographs
Priya et al. Detection and grading of diabetic retinopathy in retinal images using deep intelligent systems: a comprehensive review
Sreejini et al. Severity grading of DME from retina images: A combination of PSO and FCM with Bayes classifier
Cheng et al. Automated cell nuclei segmentation from microscopic images of cervical smear
Lokuarachchi et al. Detection of red lesions in retinal images using image processing and machine learning techniques
Dafwen Toresa et al. Automated Detection and Counting of Hard Exudates for Diabetic Retinopathy by using Watershed and Double Top-Bottom Hat Filtering Algorithm
Waseem et al. Drusen detection from colored fundus images for diagnosis of age related Macular degeneration
Ashame et al. Abnormality Detection in Eye Fundus Retina
Athab et al. Disc and Cup Segmentation for Glaucoma Detection
Feroui et al. New segmentation methodology for exudate detection in color fundus images
Manjaramkar et al. Automated red lesion detection: an overview

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18914825

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018914825

Country of ref document: EP

Effective date: 20201113