CN114549541A - Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium - Google Patents
- Publication number
- CN114549541A (application number CN202011246515.1A)
- Authority
- CN
- China
- Prior art keywords
- model
- cardiovascular
- cerebrovascular diseases
- data set
- occurrence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11—Image analysis; segmentation; region-based segmentation
- G06N3/045—Neural networks; architecture; combinations of networks
- G06N3/084—Neural networks; learning methods; backpropagation, e.g. using gradient descent
- G06T3/4038—Geometric image transformations; scaling; image mosaicing, e.g. composing plane images from plane sub-images
- G06T7/0012—Image analysis; inspection of images; biomedical image inspection
- G16H30/40—Healthcare informatics; ICT for processing medical images, e.g. editing
- G16H50/30—Healthcare informatics; ICT for medical diagnosis and data mining; calculating health indices; individual health risk assessment
- G06T2207/30101—Biomedical image processing; blood vessel, artery, vein, vascular
- Y02A90/10—ICT supporting adaptation to climate change, e.g. weather forecasting
Abstract
The invention discloses a method, a system, computer equipment, and a storage medium for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images. The method comprises the following steps. S1: acquire a sample data set of cardiovascular and cerebrovascular diseases, the sample data set comprising fundus images of past patients together with the occurrence of cardiovascular and cerebrovascular disease at three-year and five-year follow-up. S2: with the fundus images of the sample data set as input data and the three-year and five-year follow-up outcomes as data labels, input them into the deep learning model Inception-ResNet-V2 for feature extraction, training, and verification, and construct a fundus-image-based prediction model of cardiovascular and cerebrovascular occurrence type and risk. S3: input the fundus image of a patient to be examined into the constructed prediction model and output a prediction result, namely the disease category classification and occurrence risk grade classification of cardiovascular and cerebrovascular disease for that patient over the coming three and five years. The method predicts the type and risk of cardiovascular and cerebrovascular disease occurrence from the fundus image alone, and is simple and convenient, with high prediction accuracy and good effect.
Description
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a method, a system, computer equipment, and a storage medium for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images.
Background
Cardiovascular and cerebrovascular diseases is the collective name for diseases of the heart and brain vasculature, generally referring to ischemic or hemorrhagic diseases of the heart, brain, and systemic tissues caused by hyperlipidemia, increased blood viscosity, atherosclerosis, hypertension, and the like. They are common diseases that seriously threaten human health, particularly in middle-aged and elderly people over 50, and are characterized by high morbidity, high disability rate, and high mortality. Even with the most advanced and comprehensive treatment currently available, more than 50 percent of cerebrovascular accident survivors cannot fully care for themselves. Around 15 million people worldwide die of cardiovascular and cerebrovascular diseases each year, ranking first among all causes of death. Realizing early warning of cardiovascular and cerebrovascular disease, with disease management and prevention carried out in advance, can effectively mitigate or even reduce serious outcomes such as disability and death, lighten the family, medical, and social burdens caused by these diseases, and bring great social and economic benefit.
At present, diagnosing cardiovascular and cerebrovascular diseases requires auxiliary techniques such as CT, MRI, angiography, and Doppler ultrasound to locate and characterize the lesions. However, there are as yet no reports or clinical applications of medical technology for predicting cardiovascular and cerebrovascular disease, so establishing an effective, clinically applicable prediction technique is of great significance for the effective prevention and control of these diseases.
The blood vessels in a fundus image accurately reflect the condition of the vasculature as a whole, in particular pathological changes in the small vessels and capillaries, and are therefore important indicators of cardiovascular and cerebrovascular lesions. The caliber of the vessels and their crossing patterns are also strongly suggestive of the disease type and severity of cardiovascular and cerebrovascular disease. In addition, the deposition of particles such as lipids and pigment in the vessels indicates a high risk of cardiovascular and cerebrovascular disease, and is of great importance for identifying high-risk groups. The type and risk of cardiovascular and cerebrovascular disease occurrence can therefore be predicted effectively from the fundus image.
In conventional practice, a doctor performs manual predictive analysis on a patient's fundus image data, but manual identification is difficult, its accuracy is low, it is slow, and it cannot cope with large amounts of data. With the development of science and technology, artificial intelligence techniques such as deep learning have been brought into medical diagnosis and can effectively help doctors analyze all kinds of data. However, every region of a fundus image is dominated by yellow tones with low contrast, and the fundus blood vessels grow in a network, with numerous branches of varying caliber that cross one another. As a result, an ordinary deep learning network model struggles to recognize the image features, has poor information extraction capability and limited generality, and is strongly affected by differences between populations.
Disclosure of Invention
The invention aims to overcome at least one defect of the prior art by providing a method, a system, computer equipment, and a storage medium for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images, thereby realizing such prediction, solving the problems of ordinary deep learning network models (difficult image-feature recognition, poor information extraction capability, and generality strongly affected by population differences), and achieving high prediction accuracy.
To solve the above technical problems, the invention provides a method for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images, comprising the following steps:

S1, acquiring a sample data set of cardiovascular and cerebrovascular diseases, the sample data set comprising fundus images of past patients and the occurrence of cardiovascular and cerebrovascular disease at three-year and five-year follow-up, the occurrence conditions comprising the disease types and disease status of cardiovascular and cerebrovascular diseases occurring within three years and within five years;

S2, taking the fundus images in the sample data set as input data and the occurrence of cardiovascular and cerebrovascular disease at three-year and five-year follow-up as data labels, inputting them into the deep learning model Inception-ResNet-V2 for feature extraction, training, and verification, and constructing a fundus-image-based prediction model of cardiovascular and cerebrovascular occurrence type and risk, the prediction model comprising a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model, and a five-year risk prediction model;

and S3, inputting the fundus image of the patient to be examined into the constructed prediction model and outputting a prediction result, the prediction result comprising the disease category classification and occurrence risk grade classification of cardiovascular and cerebrovascular disease for the patient over the coming three and five years.
The model Inception-ResNet-V2 combines the advantages of the Inception and ResNet model families. It is a variant of the earlier Inception-V3 model and a convolutional neural network that achieved top accuracy on the ILSVRC image classification benchmark. The content of a fundus image includes the retina, optic disc, macula, and the arteriovenous vessels and their branches. Each region of the fundus image is dominated by yellow tones with low contrast; Inception-ResNet-V2 has a strong ability to enhance image contrast, so that weak color differences between regions can be amplified and the information differences between pixels extracted, assisting accurate localization and segmentation between different structures and between the normal and lesioned parts of the same structure. Meanwhile, the model is trained on the basis of dimensionality reduction without losing image information, which greatly improves its information extraction capability. In addition, there is no need to manually decide which filter to use or whether pooling is needed: these choices are determined by the model itself, and by learning on a large sample of fundus pictures the model works out which parameters are needed and which filters to combine, giving it better adaptive characteristics and effectively solving the technical problem.
Further, the specific steps of acquiring a sample data set of cardiovascular and cerebrovascular diseases in step S1 include: cropping the fundus images in the sample data set to the same size, and dividing the sample data set into a training data set, a verification data set, and a test data set.

The fundus images are cropped to the same size to unify the input image data and avoid deviations in the prediction results caused by inconsistent sizes. The sample data set is divided so as to allocate the acquired data reasonably: the training data set is used for the feature extraction and model training process, the verification data set is used to check the training result after each parameter update, and the test data set is used to evaluate the final prediction performance of the model once the parameters are fixed.
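As a sketch of this preprocessing step, cropping to a uniform size and the three-way split might look as follows; the centre-crop strategy and the 70/15/15 ratio are illustrative assumptions, since the text only says the images are cut to the same size and the set is divided into three subsets:

```python
import random

def center_crop(image, size):
    """Crop a 2-D nested-list image to size x size about its centre so
    that all inputs share the same dimensions (the crop strategy is an
    assumption; the text does not specify how the images are cut)."""
    h, w = len(image), len(image[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in image[top:top + size]]

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle and split samples into training, verification, and test
    subsets; the 70/15/15 ratio is illustrative, not from the text."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(n * train_frac), int(n * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```

A fixed seed keeps the split reproducible, so the verification and test subsets stay disjoint from training across runs.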
Further, in step S2 the input data are fed into the Inception-ResNet-V2 model for feature extraction; the specific steps include:

S21, performing primary feature extraction on the cropped image data of the training data set through the stem module. Specifically: a 3×3 convolution with stride 2 is applied to the image data to extract feature Y1; two 3×3 convolutions are applied to Y1 to obtain feature Y2; a 3×3 convolution with stride 2 and a 3×3 convolution are applied to Y2 in parallel, and the two outputs are spliced to obtain feature Y3; Y3 is passed through parallel branches of 1×1 and 3×3 convolutions and of 1×1, 7×1, 1×7, and 3×3 convolutions, and the output features are spliced to obtain feature Y4; finally, a 3×3 convolution and a max-pooling operation are applied to Y4 to obtain feature Y5;

S22, performing feature extraction with Y5 as the input of the Inception-ResNet-A module. Specifically: the feature Y5 output by the stem module is passed through a Relu activation layer; three branches of 1×1, 3×3, and 3×3 convolutions are computed on it, and their outputs are passed through a 1×1 linear convolution to obtain feature Y6; Y6 is then spliced with the activated input feature Y5, and a final Relu activation yields output feature Y7;

S23, taking Y7 as the input of the Reduction-A module and further abstracting it through three-path convolution and max-pooling calculations to extract feature Y8; inputting Y8 into the Inception-ResNet-B module to obtain feature Y9; inputting Y9 into Reduction-B to reduce the feature size, obtaining feature Y10; and inputting Y10 into the Inception-ResNet-C module for the final convolution calculations, obtaining feature Y11;

S24, performing a pooling operation on feature Y11, applying dropout with a keep probability of 0.8, and finally classifying with a softmax classifier to obtain a two-class output as a one-dimensional 1×2 feature vector, whose entries are mapped respectively to cardiovascular or cerebrovascular lesions of the patient within three years, thereby constructing the three-year disease type prediction model.

S25, repeating steps S21 to S23 to construct the three-year occurrence risk grade classification model: a pooling operation is performed on feature Y11, dropout with a keep probability of 0.8 is applied, and a softmax classifier finally yields a one-dimensional 1×3 feature vector whose entries are mapped to the patient's risk of cardiovascular and cerebrovascular disease within three years, namely low, medium, and high risk.

S26, repeating steps S21 to S25 to construct the five-year disease type and occurrence risk grade prediction models.
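The spatial sizes implied by the stem of S21 can be traced with the standard convolution output-size formula. The 299×299 input resolution and 'valid' padding below are assumptions taken from the published Inception-ResNet-V2 architecture; the patent text does not state them:

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Standard output-size formula for a convolution or pooling layer.
    return (size + 2 * padding - kernel) // stride + 1

# Tracing the spatial size through the stem stages of S21:
s = 299
s = conv_out(s, 3, stride=2)   # 3x3 stride-2 conv (Y1)            -> 149
s = conv_out(s, 3)             # 3x3 conv (Y2)                     -> 147
s = conv_out(s, 3, stride=2)   # parallel stride-2 stage (Y3)      -> 73
s = conv_out(s, 3)             # branch ending in a 3x3 conv (Y4)  -> 71
s = conv_out(s, 3, stride=2)   # final conv / max-pool stage (Y5)  -> 35
```

Each stride-2 stage roughly halves the resolution while the channel count grows, which is the dimensionality-reduction-without-information-loss behaviour described above.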
Further, the convolution calculation formula in the Inception-ResNet-V2 model is as follows:

$$v_{ij}^{xy} = b_{ij} + \sum_{m}\sum_{p=0}^{P_i-1}\sum_{q=0}^{Q_i-1} w_{ijm}^{pq}\, v_{(i-1)m}^{(x+p)(y+q)}$$

where $v_{ij}^{xy}$ represents the value of the jth feature map of the ith layer at (x, y); $P_i$ and $Q_i$ represent the size of the convolution kernel used by the ith layer; $w_{ijm}^{pq}$ represents the weight at point (p, q) of the kernel connecting the mth feature map of layer i-1, evaluated at (x+p, y+q), to the jth feature map of the ith layer; and $b_{ij}$ is the bias term;
the Relu activation layer calculation formula is as follows:
F(a)=max(0,a)
wherein a represents the input of the active layer, and F (a) represents the output of the active layer;
the calculation formula of the Softmax classifier is as follows:
wherein z isjIs the jth input variable, M is the number of input variables,the probability of outputting a category j is represented for the output.
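The three calculations described above (convolution, Relu, softmax) can be implemented directly. This NumPy sketch follows the notation of the text; computing a single output feature map with a scalar bias is a simplifying assumption:

```python
import numpy as np

def conv2d_single(prev_maps, weights, bias=0.0):
    """Value of one output feature map, following the convolution formula:
    v[x, y] = b + sum over m, p, q of w[m, p, q] * prev[m, x + p, y + q].
    prev_maps has shape (M, H, W); weights has shape (M, P, Q)."""
    m, p, q = weights.shape
    h_out = prev_maps.shape[1] - p + 1
    w_out = prev_maps.shape[2] - q + 1
    out = np.empty((h_out, w_out))
    for x in range(h_out):
        for y in range(w_out):
            out[x, y] = bias + np.sum(weights * prev_maps[:, x:x + p, y:y + q])
    return out

def relu(a):
    return np.maximum(0.0, a)          # F(a) = max(0, a)

def softmax(z):
    e = np.exp(z - np.max(z))          # shifted for numerical stability
    return e / e.sum()                 # probability of each category j
```

The explicit loops mirror the summation indices of the formula; real frameworks replace them with vectorised kernels but compute the same quantity.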
In a fundus image, the fundus blood vessels grow in a network, with numerous branches of varying caliber that cross one another, making model recognition difficult. However, vessel caliber and crossing patterns are strongly suggestive of cardiovascular and cerebrovascular disease. The model can accurately segment more than 95% of the fundus blood vessel information, and adding the nonlinear Relu activation layer improves the expressive power of the network: nonlinearity is greatly increased while the scale of the convolved image is kept unchanged (i.e., without losing resolution), so accurate image segmentation is achieved. Meanwhile, the model contains many 1×1 convolution operations, which serve to reduce dimensionality; training on this reduced representation without losing image information greatly improves the model's information extraction capability. The model also contains three Inception-ResNet modules, whose residual connections accelerate network convergence and speed up training, with the training error decreasing steadily as the network depth increases; the Reduction modules between the Inception-ResNet modules act as pooling layers and also contribute to dimensionality reduction.
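The dimensionality-reducing role of the 1×1 convolutions mentioned above can be shown in a few lines of NumPy; the shapes here are illustrative:

```python
import numpy as np

def conv1x1(feature_maps, weights):
    """A 1x1 convolution mixes channels at each pixel independently: it is
    a matrix multiply over the channel axis, reducing C input channels to
    C' output channels while leaving the H x W resolution untouched.
    feature_maps: (C, H, W); weights: (C', C)."""
    return np.einsum('oc,chw->ohw', weights, feature_maps)
```

Reducing, say, 64 channels to 16 before a 3×3 convolution cuts that convolution's cost roughly fourfold, which is why the Inception modules place 1×1 convolutions in front of their wider kernels.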
In addition, the model network has an auxiliary softmax branch, so that even the hidden units of the intermediate layers participate in the feature calculation and contribute to the prediction on the input picture, exerting a regularizing effect on the whole network model and preventing overfitting, so as to adapt the fundus-picture-based prediction of cardiovascular and cerebrovascular disease to the needs of different populations.
Further, in step S2 the input data are fed into the Inception-ResNet-V2 model for training; the specific steps include:

S27, inputting the cropped image data of the training data set into the Inception-ResNet-V2 model; through forward propagation the model network obtains the output feature vector, namely the feature vector of the classification output, and the loss is computed from it by the cross-entropy loss function:

$$L = -\sum_{i} p(x_i) \log q(x_i)$$

where $p(x_i)$ represents the true probability of the ith sample and $q(x_i)$ represents the predicted probability of the ith sample;

and S28, back-propagating through the model network with gradient updates to adjust the network parameters, and repeating this parameter update iteration so that the classification output of the model continuously approaches the true label.
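Steps S27 and S28 amount to a standard forward/backward loop. The sketch below applies the closed-form softmax cross-entropy gradient to the logits only, which is an illustrative simplification of full backpropagation through the network:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())            # shift for numerical stability
    return e / e.sum()

def cross_entropy(p_true, q_pred, eps=1e-12):
    # Loss = -sum_i p(x_i) * log q(x_i), the formula given in S27.
    return -np.sum(p_true * np.log(q_pred + eps))

def sgd_step(logits, p_true, lr=0.1):
    """One illustrative backward pass (S28). For softmax plus cross-entropy
    the gradient with respect to the logits is (q - p); a real training
    loop back-propagates this through every layer of the network."""
    q = softmax(logits)
    return logits - lr * (q - p_true)

# Repeated updates drive the classification output toward the true label.
p = np.array([1.0, 0.0])               # true one-hot label
z = np.zeros(2)                        # initial logits
for _ in range(100):
    z = sgd_step(z, p)
```

Each iteration lowers the loss, which is exactly the behaviour S28 describes: the classification output continuously approaches the real label.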
Further, in step S2 the input data are fed into the Inception-ResNet-V2 model for verification, the verification process comprising internal verification and external verification. Internal verification means that during training, the data of the verification data set are used for testing at the end of each iteration, the test result reflecting the training effect of the current iteration; external verification means that after the final iteration, the data of the test data set are used for testing, the test result reflecting the final performance of the model.

The verification process reflects the model's prediction performance through certain indices; common classification evaluation indices include accuracy, specificity, sensitivity, positive predictive value, negative predictive value, the AUC curve, the loss curve, and the like. These indices are obtained by classifying the input picture data with the model and computing the deviation between the predicted and actual labels of all positive and negative samples, and they reflect the model's ability to discriminate positive from negative samples. During training, internal verification tests the model on the verification data set at the end of each iteration, and the classification evaluation indices of that test reflect the training effect of the iteration. Training is considered complete when a fixed number of iteration cycles finishes, or when the evaluation indices on the verification set stop improving over a certain number of cycles. After the final iteration, i.e., once training is complete, the model is verified externally on the test data set, and the evaluation indices on the test data set reflect the final performance of the model.
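The confusion-matrix indices named above can be computed as follows; the function and key names are illustrative:

```python
def classification_metrics(tp, fp, tn, fn):
    """Common evaluation indices from a binary confusion matrix: counts of
    true/false positives and negatives of the classification output."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }
```

For example, 40 true positives, 10 false positives, 45 true negatives, and 5 false negatives give an accuracy of 0.85 and a positive predictive value of 0.8.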
To solve the above technical problems, another technical solution provided by the invention is a system for predicting the occurrence category and risk of cardiovascular and cerebrovascular diseases based on fundus images, the system comprising:

an acquisition module for acquiring a sample data set of cardiovascular and cerebrovascular diseases, the sample data set comprising fundus images of past patients and the occurrence of cardiovascular and cerebrovascular disease at three-year and five-year follow-up, the occurrence conditions comprising the disease types and disease status occurring within three years and within five years;

a data processing module for cropping the fundus images in the sample data set to the same size, and dividing the sample data set into a training data set, a verification data set, and a test data set;

a model building module for taking the fundus images in the sample data set as input data and the three-year and five-year follow-up occurrence of cardiovascular and cerebrovascular disease as data labels, inputting them into the deep learning model Inception-ResNet-V2 for feature extraction, training, and verification, and constructing the fundus-image-based prediction model of cardiovascular and cerebrovascular occurrence type and risk, the prediction model comprising a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model, and a five-year risk prediction model;

and a prediction module for inputting the fundus image of the patient to be examined into the constructed prediction model to obtain the prediction result, the prediction result comprising the disease category classification and occurrence risk grade classification of cardiovascular and cerebrovascular disease for the patient over the coming three and five years.
Further, the model building module comprises:
the feature extraction module is used for inputting the image data in the training data set into the Inception-ResNet-V2 model for feature extraction;
the model training module is used for inputting a large amount of image data from the training data set into the Inception-ResNet-V2 model for training, so that through continuous update iterations of the network parameters the classification output of the model continuously approaches the real labels;
and the model verification module is used for verifying the training and performance effects of the model.
In order to solve the technical problem, another technical solution provided by the present invention is a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the method steps for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images when executing the computer program.
In order to solve the technical problem, another technical solution of the present invention is a storage medium having a computer program stored thereon, wherein the computer program is executed by a processor to implement the method steps of predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images.
Compared with the prior art, the invention has the beneficial effects that:
1. the method is based on fundus images, which are analyzed and processed by the Inception-ResNet-V2 neural network model to predict cardiovascular and cerebrovascular occurrence types and risks; the method is simple, convenient and fast, and requires no assistance from other electronic images;
2. the Inception-ResNet-V2 model contains nonlinear ReLU activation layers, which improve the expressive power of the network and enable accurate image segmentation;
3. the Inception-ResNet-V2 model contains multiple 1 × 1 convolution operations, which reduce dimensionality, so that the model is trained on reduced dimensions without loss of image information, greatly improving its information extraction capability;
4. the Inception-ResNet-V2 model contains Inception-ResNet modules with residual networks, which accelerate network convergence and network training, so that the training error decreases gradually as the network depth increases;
5. according to the invention, softmax is used as the classifier in the Inception-ResNet-V2 model, which has a regularizing effect on the whole network model and prevents overfitting, so that the prediction needs of different populations for cardiovascular and cerebrovascular diseases based on fundus pictures are met and the universality of the model is enhanced.
Drawings
Fig. 1 is a schematic flow chart of a prediction method in embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the Inception-ResNet-V2 model architecture in embodiment 1 of the present invention.
Fig. 3 is a schematic flow chart of the stem module in the Inception-ResNet-V2 model in embodiment 1 of the present invention.
Fig. 4 is a schematic flow chart of the Inception-ResNet-A module in the Inception-ResNet-V2 model in embodiment 1 of the present invention.
Fig. 5 is a schematic structural diagram of a prediction system in embodiment 2 of the present invention.
Fig. 6 is a schematic structural diagram of a model building module in the prediction system in embodiment 2 of the present invention.
Fig. 7 is a normal fundus image in embodiment 1 of the present invention.
Fig. 8 is a fundus image of a patient with cardiovascular disease in example 1 of the present invention.
Fig. 9 is the first of the two contrast-enhancement modes of a normal fundus image in embodiment 1 of the present invention.
Fig. 10 is the second of the two contrast-enhancement modes of a normal fundus image in embodiment 1 of the present invention.
Fig. 11 shows the vessel segmentation by another neural network model for the first contrast-enhancement mode of the fundus image in embodiment 1 of the present invention.
Fig. 12 shows the vessel segmentation by another neural network model for the second contrast-enhancement mode of the fundus image in embodiment 1 of the present invention.
Fig. 13 shows the small-vessel segmentation by another neural network model for the two contrast-enhancement modes of the fundus image in embodiment 1 of the present invention.
Fig. 14 shows the vessel segmentation by the Inception-ResNet-V2 model of the prediction method in embodiment 1 of the present invention for the first contrast-enhancement mode of the fundus image.
Fig. 15 shows the vessel segmentation by the Inception-ResNet-V2 model of the prediction method in embodiment 1 of the present invention for the second contrast-enhancement mode of the fundus image.
Fig. 16 shows hard exudates in the fundus image of a patient with cardiovascular disease in embodiment 1 of the present invention.
Fig. 17 shows early-stage, small-range punctate hard exudates in the fundus image of a patient with cardiovascular disease in embodiment 1 of the present invention.
Fig. 18 shows the hard-exudate information extraction by the Inception-ResNet-V2 model of the prediction method in embodiment 1 of the present invention for the first contrast-enhancement mode of the fundus image.
Fig. 19 shows the hard-exudate information extraction by the Inception-ResNet-V2 model of the prediction method in embodiment 1 of the present invention for the second contrast-enhancement mode of the fundus image.
Fig. 20 shows the hard-exudate information extraction of the fundus image by another neural network model in embodiment 1 of the present invention.
Description of reference numerals: the system comprises an acquisition module 10, a data processing module 20, a model construction module 30, a prediction module 40, a feature extraction module 31, a model training module 32 and a model verification module 33.
Detailed Description
The drawings are only for purposes of illustration and are not to be construed as limiting the invention. For a better understanding of the following embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
Example 1
As shown in fig. 1, the present embodiment provides a method for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, the method including the following steps:
s1, acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and occurrence conditions of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up visits, and the occurrence conditions comprise the disease types and disease conditions of the cardiovascular and cerebrovascular diseases occurring within three years and five years;
s2, taking the fundus images in the sample data set as input data and the occurrence conditions of cardiovascular and cerebrovascular diseases at the three-year and five-year follow-up visits as data labels, inputting them into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, and constructing a cardiovascular and cerebrovascular occurrence type and risk prediction model based on the fundus images, wherein the prediction model comprises a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model and a five-year risk prediction model;
and S3, inputting the fundus image of the patient to be detected into the constructed prediction model, and outputting a prediction result, wherein the prediction result comprises disease category classification and occurrence risk grade classification of the cardiovascular and cerebrovascular diseases of the patient in the last three years and five years.
The model Inception-ResNet-V2 combines the advantages of the Inception and ResNet models; it is a variant of the earlier released Inception-V3 model and a convolutional neural network that achieved top accuracy in ILSVRC image classification. The fundus image content includes the retina, optic disc, macula, and arteriovenous vessels and their branches. The color system of each part of the fundus image is yellowish and the contrast is low; the model Inception-ResNet-V2 has a strong picture contrast enhancement capability, can strongly enhance weak color differences between different regions and extract the information differences between pixel points, thereby assisting accurate positioning and segmentation between different structures, and between normal and lesioned parts of the same structure. Meanwhile, the model is trained on reduced dimensions without losing image information, which greatly improves its information extraction capability. In addition, the model does not need a manual decision on which filter to use or whether pooling is needed: the parameters are determined automatically by the model, and through learning from a large sample of fundus pictures the model automatically determines which parameters are needed and which filters to combine, so that it has good adaptive characteristics and can effectively solve the technical problem.
Further, the step S1 of obtaining a sample data set of cardiovascular and cerebrovascular diseases includes the specific steps of: the fundus images in the sample data set are cropped into images of the same size — in this embodiment into 3-channel image data of 299 × 299 × 3 — and the sample data set is divided into a training data set, a verification data set and a test data set.
The fundus images are cropped to the same size to unify the input image data and avoid prediction deviations caused by inconsistent sizes. The sample data set is divided to allocate the obtained samples reasonably: the training data set is used for the feature extraction and model training process; the verification data set is used to verify the training result of the model after each parameter update; and the test data set is used to measure the final prediction effect of the model after the final parameter update.
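The division described above can be sketched with a reproducible shuffle-and-partition helper (the 70/15/15 split fractions are an assumption for illustration; the patent does not specify them):

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle the sample list reproducibly and partition it into
    training, verification and test subsets."""
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset(list(range(100)))
```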
Further, as shown in fig. 2, in this embodiment, in the step S2 the input data is input into the Inception-ResNet-V2 model for feature extraction, and the specific steps include:
s21, as shown in fig. 3, the 3-channel image data of 299 × 299 × 3 cropped from the training data set is subjected to initial feature extraction by the stem module. Specifically, the image data is subjected to a 3 × 3 convolution with a step size of 2 to extract the feature Y1 of 149 × 149 × 32; Y1 is then subjected to two 3 × 3 convolutions to obtain the feature Y2 of 147 × 147 × 64; Y2 is subjected in parallel to a 3 × 3 maximum pooling operation with a step size of 2 and a 3 × 3 convolution, and the two outputs are concatenated to obtain the feature Y3 of 73 × 73 × 160; Y3 is then passed through two convolution branches of 1 × 1, 3 × 3 and of 1 × 1, 7 × 1, 1 × 7, 3 × 3 respectively, and the branch outputs are concatenated to obtain the feature Y4 of 71 × 71 × 192; finally Y4 is subjected in parallel to a 3 × 3 convolution with a step size of 2 and a maximum pooling operation, and after the features are concatenated the feature Y5 of 35 × 35 × 384 is obtained;
s22, features are extracted from Y5 as the input of the Inception-ResNet-A module. Specifically, as shown in fig. 4, the feature Y5 output by the stem module is first passed through a ReLU activation layer, then through three convolution branches of 1 × 1; 1 × 1, 3 × 3; and 1 × 1, 3 × 3, 3 × 3; the three branch outputs are subjected to a 1 × 1 linear convolution to obtain the feature Y6; Y6 is fused with the activated input feature Y5 through the residual connection, and finally ReLU activation is applied to obtain the output feature Y7 of 35 × 35 × 384;
s23, taking Y7 as the input of the Reduction-A module, the feature Y8 of 17 × 17 × 1154 is further abstracted and extracted through three-path convolution and maximum pooling calculation; Y8 is input into the Inception-ResNet-B module to obtain the feature Y9 of the same size; Y9 is then input into Reduction-B to reduce the feature size, obtaining the feature Y10 of 8 × 8 × 2048; finally Y10 is input into the Inception-ResNet-C module for the final convolution calculation to obtain the feature Y11 of 8 × 8 × 2048;
s24, a pooling operation is performed on the feature Y11, followed by a dropout operation with a keep probability of 0.8, and the result finally enters the softmax classifier for classification, producing a two-class output as a one-dimensional feature vector of 1 × 2; the output results are mapped to cardiovascular lesion or cerebrovascular lesion of the patient within three years, constructing the three-year disease type prediction model;
s25, the steps S21 to S23 are repeated to construct the three-year occurrence risk grade classification model: the feature Y11 is pooled, a dropout operation with a keep probability of 0.8 is performed, and the result finally enters the softmax classifier for classification, producing a one-dimensional feature vector output of 1 × 3; the output results are mapped to the patient's risk of suffering from cardiovascular and cerebrovascular diseases within three years, namely low risk, medium risk and high risk;
s26, the steps S21 to S25 are repeated to construct the prediction models for the disease types and occurrence risk levels of cardiovascular and cerebrovascular diseases of patients within five years.
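As a quick sanity check on the feature sizes in S21, the spatial dimension at each stage can be traced with the standard output-size formula for valid convolutions (the second of the two 3 × 3 convolutions producing Y2 is assumed to be padded, so the size stays 147; channel counts are omitted):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling operation."""
    return (size + 2 * pad - kernel) // stride + 1

# trace of the spatial sizes through the stem module described in S21
s = conv_out(299, 3, stride=2)  # Y1: 149
s = conv_out(s, 3)              # 147
s = conv_out(s, 3, pad=1)       # Y2: 147 (padded conv keeps the size)
s = conv_out(s, 3, stride=2)    # Y3: 73  (parallel max-pool / conv, concatenated)
s = conv_out(s, 3)              # Y4: 71  (after the two convolution branches)
s = conv_out(s, 3, stride=2)    # Y5: 35
print(s)  # 35
```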
Further, the convolution calculation formula in the Inception-ResNet-V2 model is as follows:

$$v_{ij}^{xy} = f\Big(b_{ij} + \sum_{m}\sum_{p=0}^{P_i-1}\sum_{q=0}^{Q_i-1} w_{ijm}^{pq}\, v_{(i-1)m}^{(x+p)(y+q)}\Big)$$

wherein $v_{ij}^{xy}$ represents the value of the jth feature map of the ith layer at the point (x, y); $P_i$ and $Q_i$ represent the size of the convolution kernel used by the ith layer; $w_{ijm}^{pq}$ represents the weight at position (p, q) of the convolution kernel connecting the mth feature map of layer i−1 with the jth feature map of layer i, applied at the point (x + p, y + q); $b_{ij}$ is the bias and $f$ the activation function;
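The per-point convolution sum described above — over input feature maps m and kernel offsets (p, q) — can be checked numerically with a naive implementation (the activation f is omitted; the input and kernel values are illustrative):

```python
import numpy as np

def conv_value(v_prev, w, b, x, y):
    """Evaluate the convolution sum at one output point (x, y):
    accumulate over input feature maps m and kernel offsets (p, q)."""
    M, P, Q = w.shape
    acc = b
    for m in range(M):
        for p in range(P):
            for q in range(Q):
                acc += w[m, p, q] * v_prev[m, x + p, y + q]
    return acc

v = np.arange(9.0).reshape(1, 3, 3)  # one 3x3 input feature map
w = np.ones((1, 2, 2))               # one 2x2 kernel, all weights 1
print(conv_value(v, w, b=0.0, x=0, y=0))  # 0 + 1 + 3 + 4 = 8.0
```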
the Relu activation layer calculation formula is as follows:
F(a)=max(0,a)
wherein a represents the input of the active layer, and F (a) represents the output of the active layer;
the calculation formula of the Softmax classifier is as follows:
wherein z isjIs the jth input variable, M is the number of input variables,the probability of outputting a category j is represented for the output.
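The ReLU and softmax formulas above can be sketched directly in numpy (the max-subtraction inside softmax is a standard numerical-stability trick, not part of the formula itself):

```python
import numpy as np

def relu(a):
    """F(a) = max(0, a), applied element-wise."""
    return np.maximum(0.0, a)

def softmax(z):
    """Normalized exponentials over the M input variables."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

p = softmax([2.0, 1.0, 0.1])  # e.g. probabilities for three risk grades
```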
In fundus images, the blood vessels grow in a network: branches are numerous, thicknesses differ and vessels cross one another, which makes model identification difficult. However, the thickness and crossing patterns of the vessels are highly indicative of cardiovascular and cerebrovascular diseases. The model can accurately segment more than 95% of the fundus vessel information. Adding nonlinear ReLU activation layers improves the expressive power of the network: nonlinearity is greatly increased while the scale of the convolved image remains unchanged (i.e. without loss of resolution), enabling accurate image segmentation. Meanwhile, the model's many 1 × 1 convolution operations reduce dimensionality, so that the model is trained on reduced dimensions without losing image information, greatly improving its information extraction capability. The model also contains three kinds of Inception-ResNet modules with residual connections, which accelerate network convergence and the training speed of the network, so that the training error decreases gradually as the network depth increases; the Reduction modules between the Inception-ResNet modules act as pooling layers and also reduce dimensionality.
In addition, the model network of the invention has a softmax branch: even hidden units and intermediate layers participate in the feature calculation and contribute to the picture prediction result, which has a regularizing effect on the whole network model and prevents overfitting, so that the model adapts to the needs of different populations for predicting cardiovascular and cerebrovascular diseases from fundus pictures.
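The residual (shortcut) connection inside the Inception-ResNet modules mentioned above can be sketched generically as follows; the multi-branch convolution path is abstracted into a single callable, and the `scale` parameter (here defaulting to 1.0) is purely illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def inception_resnet_unit(x, branch_fn, scale=1.0):
    """Skeleton of an Inception-ResNet unit: the multi-branch output is
    added back onto the activated input (the residual shortcut), then
    ReLU is applied again. `branch_fn` stands in for the convolution
    branches and is illustrative only."""
    a = relu(x)
    return relu(a + scale * branch_fn(a))

y = inception_resnet_unit(np.array([1.0, -2.0]), lambda z: 0.5 * z)
```

Because the branch output is added to (rather than replacing) the input, gradients can flow through the shortcut, which is why the text notes that convergence is accelerated as depth grows.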
Further, in the step S2 the input data is input into the Inception-ResNet-V2 model for training, and the specific steps include:
s27, the cropped image data of the training data set is input into the Inception-ResNet-V2 model; through forward propagation the model network produces the output feature vector, i.e. the classified output feature vector, and the loss is computed from it with the cross-entropy loss function, whose calculation formula is:

$$L = -\sum_{i} p(x_i)\,\log q(x_i)$$

wherein $p(x_i)$ represents the true probability of the ith sample and $q(x_i)$ represents the predicted probability of the ith sample;
s28, the loss is back-propagated through the model network to update the network parameters by gradient updates, and this parameter update iteration is repeated so that the classification output of the model continuously approaches the real labels.
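A minimal numeric sketch of the cross-entropy loss used in S27 (predictions are clipped to avoid log(0); the probability vectors are illustrative):

```python
import numpy as np

def cross_entropy(p_true, q_pred, eps=1e-12):
    """Loss from the formula above: -sum_i p(x_i) * log q(x_i)."""
    q = np.clip(np.asarray(q_pred, dtype=float), eps, 1.0)
    return float(-np.sum(np.asarray(p_true, dtype=float) * np.log(q)))

loss = cross_entropy([1.0, 0.0], [0.5, 0.5])  # -log(0.5), about 0.693
```

The loss is zero only when the predicted distribution matches the true one-hot label exactly, which is why minimizing it pushes the classification output toward the real labels.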
Further, in the step S2 the input data is input into the Inception-ResNet-V2 model for verification, where the verification process includes internal verification and external verification: in internal verification, the data of the verification data set is used for testing at the end of each iteration during training, and the test result reflects the model training effect of that iteration; in external verification, the data of the test data set is used for testing after the final iteration, and the test result reflects the final performance of the model.
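The internal-verification stopping rule described above — finish training when the verification-set index no longer improves over some number of iteration periods — can be sketched as an early-stopping loop (the callables and the toy score sequence are illustrative assumptions):

```python
def train_with_early_stopping(train_step, validate, max_epochs=100, patience=5):
    """Run validate() after every training iteration and stop when the
    validation score has not improved for `patience` consecutive epochs."""
    best_score, best_epoch = float("-inf"), -1
    for epoch in range(max_epochs):
        train_step(epoch)
        score = validate(epoch)
        if score > best_score:
            best_score, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            break  # verification index has plateaued: training is finished
    return best_score, best_epoch

# toy run: the validation score peaks at epoch 3, then plateaus
scores = [0.5, 0.6, 0.7, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]
best, at = train_with_early_stopping(lambda e: None, lambda e: scores[e],
                                     max_epochs=len(scores))
```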
Verifying a model means reflecting its prediction performance through certain indexes; common classification evaluation indexes include accuracy, specificity, sensitivity, positive predictive value, negative predictive value, the AUC curve, the loss curve and the like. These indexes are obtained by classifying the input picture data with the model and computing the deviation between the predicted and actual labels of all input positive and negative samples, and they reflect the model's ability to discriminate positive from negative samples. During training, internal verification tests the model on the verification data set at the end of each iteration, and the resulting classification evaluation indexes reflect the training effect of that iteration. Training is considered finished either after a preset number of iteration periods, or when the evaluation indexes on the verification set no longer improve over a certain number of iteration periods. After the final iteration, i.e. when training is finished, the effect of the model is verified externally on the test data set, and the evaluation indexes of the model's prediction ability on the test data set reflect its final performance. In this embodiment, the prediction accuracy of the prediction method of this embodiment reaches 90% for the three-year cardiovascular and cerebrovascular disease type, 95% for the three-year occurrence risk, 80% for the five-year disease type and 85% for the five-year occurrence risk.
In this embodiment, a normal fundus image is shown in fig. 7 and a fundus image of a patient with cardiovascular and cerebrovascular diseases in fig. 8; it can be seen that the vessels in the patient's fundus image are thin with rough branches, that the color system of each part of the fundus image is yellowish, and that the contrast is low. The two contrast-enhancement modes of a normal fundus image are shown in fig. 9 and fig. 10. Fig. 11 and fig. 12 show the vessel segmentation obtained by another neural network model under the two contrast-enhancement modes: the segmentation of the fundus vessels is incomplete and small-vessel information is lost. Fig. 13 shows that model's small-vessel segmentation under the two modes, confirming its low ability to segment small fundus vessels. Fig. 14 and fig. 15 show the vessel segmentation of the Inception-ResNet-V2 model of the prediction method of this embodiment under the two contrast-enhancement modes: the model clearly segments the fundus vessels strongly, and most small-vessel information is extracted accurately. This demonstrates that the Inception-ResNet-V2 model of this embodiment has obvious advantages over other models, namely strong vessel segmentation capability and high information extraction capability.
Meanwhile, fig. 16 shows hard exudates (the bright yellow areas in the figure) in the fundus image of a patient with cardiovascular and cerebrovascular diseases. Hard exudates are important for cardiovascular and cerebrovascular prediction, but the information features of these pigment particles are not obvious, so they are difficult to identify with the human eye and cannot be labeled accurately; fig. 17 is an example of early small-range punctate hard exudation in cardiovascular and cerebrovascular disease. As shown in fig. 18 and fig. 19, the Inception-ResNet-V2 model of the prediction method of this embodiment effectively extracts the hard-exudate information under both contrast-enhancement modes, whereas, as shown in fig. 20, other neural network models cannot. This again shows that the Inception-ResNet-V2 model of this embodiment has higher information extraction capability than other models, which is more favorable for predicting the occurrence of cardiovascular and cerebrovascular diseases from fundus images in the present invention.
Example 2
As shown in fig. 5, the present embodiment provides a cardiovascular and cerebrovascular disease occurrence type and risk prediction system based on fundus images, the system includes:
the acquisition module 10 is used for acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and occurrence conditions of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up visits, and the occurrence conditions comprise the disease types and disease conditions of the cardiovascular and cerebrovascular diseases occurring within three years and five years;
the data processing module 20 is used for cutting the fundus images in the sample data set into images with the same size, and dividing the sample data set into a training data set, a verification data set and a test data set;
the model building module 30 is used for inputting the fundus images in the sample data set as input data into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, taking the occurrence conditions of cardiovascular and cerebrovascular diseases at the three-year and five-year follow-up visits as data labels, and building a cardiovascular and cerebrovascular occurrence type and risk prediction model based on the fundus images, wherein the prediction model comprises a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model and a five-year risk prediction model;
and the prediction module 40 is used for inputting the fundus image of the patient to be detected into the constructed prediction model to obtain the prediction result of the patient to be detected, wherein the prediction result comprises the disease category classification and the occurrence risk grade classification of the cardiovascular and cerebrovascular diseases of the patient in the last three years and five years.
Further, as shown in fig. 6, the model building module 30 includes:
the feature extraction module 31 is used for inputting the image data in the training data set into the Inception-ResNet-V2 model for feature extraction;
the model training module 32 is used for inputting a large amount of image data from the training data set into the Inception-ResNet-V2 model for training, so that through continuous update iterations of the network parameters the classification output of the model continuously approaches the real labels;
a model validation module 33, the model validation module 33 being configured to validate training and performance effects of the model.
Example 3
The present embodiment provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the above method for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images, such as steps S1 to S3 shown in fig. 1, or implements the functions of the modules of the above system for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images. To avoid repetition, further description is omitted here.
Example 4
The present embodiment provides a storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the above-mentioned cardiovascular and cerebrovascular disease occurrence type and risk prediction method steps based on a fundus image, such as steps S1 to S3 shown in fig. 1, or the processor implements the functions of the above-mentioned cardiovascular and cerebrovascular disease occurrence type and risk prediction system modules based on a fundus image when executing the computer program. To avoid repetition, further description is omitted here.
It is to be understood that the storage medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, and the like.
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the technical solutions of the present invention, and are not intended to limit the specific embodiments of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention claims should be included in the protection scope of the present invention claims.
Claims (10)
1. A cardiovascular and cerebrovascular disease occurrence type and risk prediction method based on fundus images is characterized by comprising the following steps:
s1, acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and occurrence conditions of the cardiovascular and cerebrovascular diseases followed for three years and five years, and the occurrence conditions comprise disease types and disease conditions of the cardiovascular and cerebrovascular diseases occurring for three years and five years;
s2, taking the fundus images in the sample data set as input data and the occurrence conditions of cardiovascular and cerebrovascular diseases at the three-year and five-year follow-up visits as data labels, inputting them into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, and constructing a cardiovascular and cerebrovascular occurrence type and risk prediction model based on the fundus images, wherein the prediction model comprises a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model and a five-year risk prediction model;
and S3, inputting the fundus image of the patient to be detected into the constructed prediction model, and outputting a prediction result, wherein the prediction result comprises disease category classification and occurrence risk grade classification of the cardiovascular and cerebrovascular diseases of the patient in the last three years and five years.
2. The method for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images according to claim 1, wherein the step S1 of obtaining a sample data set of cardiovascular and cerebrovascular diseases comprises the following specific steps: the fundus images in the sample data set are cropped into images of the same size, and the sample data set is divided into a training data set, a verification data set and a test data set.
3. The method for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images according to claim 2, wherein the step S2 is performed by inputting input data into an inclusion-ResNet-V2 model, and the specific steps include:
s21, performing primary feature extraction on the image data after the training data are centrally cut through a stem module, specifically, performing convolution calculation with 3 x 3 and step length of 2 on the image data to extract a feature Y1, performing convolution calculation with 3 x 3 twice on Y1 to obtain a feature Y2, performing convolution calculation with 3 x 3 and step length of 2 on Y2 once and convolution calculation with 3 x 3 once respectively, performing feature splicing on the two outputs to obtain a feature Y3, performing convolution calculation with 1 x 1, 3 x 3 and 1 x 1, 7 x 1, 1 x 7 and 3 x 3 respectively on Y3, splicing the output features to obtain a feature Y4, and finally performing convolution calculation with 3 x 3 and maximum convolution operation on Y4 to obtain a feature Y5;
s22, performing feature extraction by taking Y5 as the input of the Inception-ResNet-A module, specifically: processing the feature Y5 output by the stem module through a ReLU activation layer; performing 1 x 1, 3 x 3 and 3 x 3 convolutions on the three branch outputs respectively; performing a 1 x 1 linear convolution activation on the three outputs to obtain a feature Y6; splicing Y6 with the activated input feature Y5; and finally performing ReLU activation to obtain an output feature Y7;
s23, taking Y7 as the input of the Reduction-A module and further abstracting and extracting a feature Y8 through three convolution paths and a max-pooling calculation; inputting Y8 into the Inception-ResNet-B module to obtain a feature Y9; inputting Y9 into the Reduction-B module to reduce the feature size and obtain a feature Y10; and inputting Y10 into the Inception-ResNet-C module for the final convolution calculation to obtain a feature Y11;
s24, performing a pooling operation on the feature Y11, performing a dropout operation with a keep probability of 0.8, and finally entering a softmax classifier for classification to obtain a 1 x 2 one-dimensional feature vector as output, wherein the output result is mapped to whether the patient develops a cardiovascular lesion or a cerebrovascular lesion within three years, so as to construct the three-year disease category prediction model;
S25, repeating steps S21-S23 to construct the three-year occurrence risk grade classification model: performing a pooling operation on the feature Y11, performing a dropout operation with a keep probability of 0.8, and finally classifying in a softmax classifier to obtain a 1 x 3 one-dimensional feature vector as output, wherein the output result is mapped to the patient's risk of suffering from cardiovascular and cerebrovascular diseases within three years, specifically low risk, medium risk and high risk;
S26, repeating steps S21-S25 to construct the five-year cardiovascular and cerebrovascular disease type and occurrence risk level prediction models.
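The convolution arithmetic and the two classification heads described in steps s21-s25 can be sketched as follows. The 299-pixel input size (the canonical Inception-ResNet-V2 input) and the orderings of the output labels are assumptions for illustration; the claims do not state them:

```python
import numpy as np

def conv_out_size(n, kernel, stride=1, padding=0):
    """Spatial size after a convolution or pooling layer:
    floor((n + 2*padding - kernel) / stride) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Tracing the first stem convolutions of s21 on an assumed 299-pixel input:
s1 = conv_out_size(299, 3, stride=2)   # 3 x 3 conv, step length 2 -> 149
s2 = conv_out_size(s1, 3)              # following 3 x 3 conv -> 147

# The two heads of s24/s25: a 1 x 2 disease-type output and a 1 x 3
# risk-grade output; the label orders below are hypothetical.
DISEASES = ["cardiovascular lesion", "cerebrovascular lesion"]
RISKS = ["low risk", "medium risk", "high risk"]

disease = DISEASES[int(np.argmax(softmax(np.array([2.0, 0.5]))))]
risk = RISKS[int(np.argmax(softmax(np.array([0.2, 0.3, 1.8]))))]
```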
4. The method for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images according to claim 3, wherein the convolution calculation formula in the Inception-ResNet-V2 model is as follows:
v(i, j, x, y) = Σ_m Σ_{p=0..Pi-1} Σ_{q=0..Qi-1} w(i, j, m, p, q) · v(i-1, m, x+p, y+q)
wherein v(i, j, x, y) represents the value of the jth feature map of the ith layer at (x, y); Pi and Qi represent the height and width of the convolution kernel used by the ith layer; w(i, j, m, p, q) represents the weight at position (p, q) of the convolution kernel connecting the mth feature map of layer i-1 with the jth feature map of the ith layer, which multiplies the input value at point (x+p, y+q);
the ReLU activation layer calculation formula is as follows:
F(a) = max(0, a)
wherein a represents the input of the activation layer, and F(a) represents the output of the activation layer;
the calculation formula of the Softmax classifier is as follows:
Softmax(z_j) = exp(z_j) / Σ_{k=1..K} exp(z_k)
wherein z_j represents the jth element of the classifier input vector, and K represents the number of classes.
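The two formulas in claim 4 can be checked directly; a minimal sketch in plain Python (the max-subtraction in the softmax is a standard numerical-stability trick, not part of the claim):

```python
import math

def relu(a):
    """F(a) = max(0, a), the ReLU activation formula."""
    return max(0.0, a)

def softmax(z):
    """Softmax(z_j) = exp(z_j) / sum_k exp(z_k)."""
    m = max(z)  # subtract the maximum for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]
```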
5. The method for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images according to claim 4, wherein the input data is input into the Inception-ResNet-V2 model for training in step S2, and the specific steps comprise:
s27, inputting the cropped image data from the training data set into the Inception-ResNet-V2 model, obtaining the output feature vector of the classification head through forward propagation of the model network, and calculating the loss from the output feature vector through the cross-entropy loss function, whose calculation formula is as follows:
H(p, q) = -Σ_i p(x_i) · log q(x_i)
wherein p(x_i) represents the true probability of the ith sample, and q(x_i) represents the prediction probability of the ith sample;
and S28, back-propagating the loss to the model network to update the network parameters by gradient updates, and repeating this iterative parameter-update process so that the classification output of the model continuously approaches the real label.
6. The method for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images according to claim 5, wherein the input data is input into the Inception-ResNet-V2 model for verification in step S2, wherein the verification process comprises internal verification and external verification; the internal verification means that, during training, the verification data set is used for testing at the end of each iteration, and the test result reflects the model training effect of that iteration; and the external verification means that, after the final iteration is finished, the test data set is used for testing, and the test result reflects the final performance of the model.
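The verification scheme in claim 6 can be sketched as a training loop that evaluates the verification set after every iteration and the test set only once at the end. The callables below are placeholders, not part of the source:

```python
def train_with_validation(n_epochs, train_step, evaluate, val_set, test_set):
    """Internal verification: evaluate the verification set at the end of
    every epoch.  External verification: evaluate the held-out test set
    once, after the final iteration.  train_step and evaluate are
    hypothetical stand-ins for the real training and metric functions."""
    val_history = []
    for epoch in range(n_epochs):
        train_step(epoch)
        val_history.append(evaluate(val_set))   # internal verification
    final_score = evaluate(test_set)            # external verification
    return val_history, final_score
```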
7. A system for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, characterized in that the system comprises:
the acquisition module is used for acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and the occurrence conditions of cardiovascular and cerebrovascular diseases at the three-year and five-year follow-ups, and the occurrence conditions comprise the types and states of the cardiovascular and cerebrovascular diseases occurring within three years and five years;
the data processing module is used for cropping the fundus images in the sample data set into images of the same size, and dividing the sample data set into a training data set, a verification data set and a test data set;
the model building module is used for inputting the fundus images in the sample data set into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, taking the occurrence conditions of cardiovascular and cerebrovascular diseases at the three-year and five-year follow-ups as data labels, and building a fundus-image-based prediction model for the occurrence types and risks of cardiovascular and cerebrovascular diseases, wherein the prediction model comprises a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model and a five-year risk prediction model;
and the prediction module is used for inputting the fundus image of the patient to be examined into the constructed prediction model to obtain the prediction result of the patient, wherein the prediction result comprises the disease category classification and the occurrence risk grade classification of the patient's cardiovascular and cerebrovascular diseases within three years and five years.
8. The system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images according to claim 7, wherein the model building module comprises:
the feature extraction module is used for inputting the image data in the training data set into the Inception-ResNet-V2 model for feature extraction;
the model training module is used for inputting a large amount of image data from the training data set into the Inception-ResNet-V2 model for training, so that the classification output of the model continuously approaches the real label through repeated updating iterations of the network parameters;
and the model verification module is used for verifying the training and performance effects of the model.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011246515.1A CN114549541A (en) | 2020-11-10 | 2020-11-10 | Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114549541A true CN114549541A (en) | 2022-05-27 |
Family
ID=81660132
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011246515.1A Pending CN114549541A (en) | 2020-11-10 | 2020-11-10 | Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114549541A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111046835A (en) * | 2019-12-24 | 2020-04-21 | 杭州求是创新健康科技有限公司 | Eyeground illumination multiple disease detection system based on regional feature set neural network |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114864093A (en) * | 2022-07-04 | 2022-08-05 | 北京鹰瞳科技发展股份有限公司 | Apparatus, method and storage medium for disease prediction based on fundus image |
CN115456962A (en) * | 2022-08-24 | 2022-12-09 | 中山大学中山眼科中心 | Choroidal vascular index prediction method and device based on convolutional neural network |
CN115456962B (en) * | 2022-08-24 | 2023-09-29 | 中山大学中山眼科中心 | Choroidal blood vessel index prediction method and device based on convolutional neural network |
CN115761365A (en) * | 2022-11-28 | 2023-03-07 | 首都医科大学附属北京友谊医院 | Intraoperative hemorrhage condition determination method and device and electronic equipment |
CN115761365B (en) * | 2022-11-28 | 2023-12-01 | 首都医科大学附属北京友谊医院 | Method and device for determining bleeding condition in operation and electronic equipment |
CN116798625A (en) * | 2023-06-26 | 2023-09-22 | 清华大学 | Stroke risk screening device, electronic equipment and storage medium |
CN118115466A (en) * | 2024-03-07 | 2024-05-31 | 珠海全一科技有限公司 | Fundus pseudo focus detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114549541A (en) | Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium | |
CN108021916A (en) | Deep learning diabetic retinopathy sorting technique based on notice mechanism | |
CN112085745B (en) | Retina blood vessel image segmentation method of multichannel U-shaped full convolution neural network based on balanced sampling and splicing | |
CN113205524B (en) | Blood vessel image segmentation method, device and equipment based on U-Net | |
Kumar et al. | Redefining Retinal Lesion Segmentation: A Quantum Leap With DL-UNet Enhanced Auto Encoder-Decoder for Fundus Image Analysis | |
Boral et al. | Classification of diabetic retinopathy based on hybrid neural network | |
CN112733961A (en) | Method and system for classifying diabetic retinopathy based on attention mechanism | |
CN111080643A (en) | Method and device for classifying diabetes and related diseases based on fundus images | |
CN112132801B (en) | Lung bulla focus detection method and system based on deep learning | |
CN113012163A (en) | Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network | |
CN112862805B (en) | Automatic auditory neuroma image segmentation method and system | |
CN111028230A (en) | Fundus image optic disc and macula lutea positioning detection algorithm based on YOLO-V3 | |
CN111028232A (en) | Diabetes classification method and equipment based on fundus images | |
Zhang et al. | MC-UNet multi-module concatenation based on U-shape network for retinal blood vessels segmentation | |
CN110992309B (en) | Fundus image segmentation method based on deep information transfer network | |
CN113240677B (en) | Retina optic disc segmentation method based on deep learning | |
CN111047590A (en) | Hypertension classification method and device based on fundus images | |
CN113158822B (en) | Method and device for classifying eye detection data based on cross-modal relation reasoning | |
Dong et al. | Supervised learning-based retinal vascular segmentation by m-unet full convolutional neural network | |
CN117934489A (en) | Fundus hard exudate segmentation method based on residual error and pyramid segmentation attention | |
CN117078697B (en) | Fundus disease seed detection method based on cascade model fusion | |
CN115937192B (en) | Unsupervised retina blood vessel segmentation method and system and electronic equipment | |
CN116883420A (en) | Choroidal neovascularization segmentation method and system in optical coherence tomography image | |
CN116109872A (en) | Blood vessel naming method and device, electronic equipment and storage medium | |
CN115393582A (en) | Fundus image artery and vein vessel segmentation method based on attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||