CN114549541A - Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium - Google Patents


Info

Publication number
CN114549541A
CN114549541A (application CN202011246515.1A)
Authority
CN
China
Prior art keywords
model, cardiovascular, cerebrovascular diseases, data set, occurrence
Prior art date
Legal status
Pending
Application number
CN202011246515.1A
Other languages
Chinese (zh)
Inventor
项毅帆 (Xiang Yifan)
骞保民 (Qian Baomin)
周骞 (Zhou Qian)
Current Assignee
Jian Baomin
Original Assignee
Jian Baomin
Priority date
Filing date
Publication date
Application filed by Jian Baomin
Priority: CN202011246515.1A
Publication: CN114549541A
Legal status: Pending

Classifications

    • G06T 7/11 — Image analysis; segmentation; edge detection; region-based segmentation
    • G06N 3/045 — Computing arrangements based on biological models; neural networks; architecture; combinations of networks
    • G06N 3/084 — Neural networks; learning methods; backpropagation, e.g. using gradient descent
    • G06T 3/4038 — Geometric image transformations; scaling of whole images or parts thereof; image mosaicing
    • G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G16H 30/40 — Healthcare informatics; ICT specially adapted for processing medical images, e.g. editing
    • G16H 50/30 — Healthcare informatics; ICT for medical diagnosis or data mining; calculating health indices; individual health risk assessment
    • G06T 2207/30101 — Indexing scheme for image analysis; subject of image: blood vessel; artery; vein; vascular
    • Y02A 90/10 — Information and communication technologies supporting adaptation to climate change, e.g. weather forecasting or climate simulation


Abstract

The invention discloses a method and a system for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, together with computer equipment and a storage medium. The method comprises the following steps: S1, acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up; S2, using the fundus images of the sample data set as input data and the three-year and five-year follow-up occurrence of cardiovascular and cerebrovascular diseases as data labels, inputting them into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, thereby constructing a fundus-image-based model for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases; and S3, inputting the fundus image of the patient to be examined into the constructed prediction model and outputting a prediction result, the prediction result being the disease-category classification and occurrence-risk grade of cardiovascular and cerebrovascular disease for that patient over the following three and five years. The method predicts the type and risk of cardiovascular and cerebrovascular disease occurrence from fundus images alone, and is simple, convenient, highly accurate in prediction and effective.

Description

Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a method, a system, computer equipment and a storage medium for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images.
Background
Cardiovascular and cerebrovascular disease is the general name for diseases of the heart and brain vasculature, and generally refers to ischemic or hemorrhagic disease of the heart, brain and systemic tissues caused by hyperlipidemia, increased blood viscosity, atherosclerosis, hypertension and the like. These are common diseases that seriously threaten human health, particularly in middle-aged and elderly people over 50, and are characterized by high morbidity, high disability rates and high mortality. Even with the most advanced and complete treatment currently available, more than 50 percent of survivors of cerebrovascular accidents cannot fully care for themselves. Worldwide, as many as 15 million people die of cardiovascular and cerebrovascular diseases each year, ranking first among all causes of death. Early warning of cardiovascular and cerebrovascular diseases, with disease management and prevention carried out in advance, can effectively alleviate or even reduce serious harms such as disability and death caused by these diseases, lessen the family, medical and social burdens they impose, and deliver great social and economic benefits.
At present, diagnosing cardiovascular and cerebrovascular diseases requires auxiliary techniques such as CT, MRI, angiography and Doppler ultrasound to locate and characterize the lesions. However, there is as yet no reported or clinically applied medical technology for predicting cardiovascular and cerebrovascular diseases, so establishing an effective, clinically applicable prediction technique is of great significance for their effective prevention and control.
The blood vessels in fundus images accurately reflect the condition of the vasculature as a whole, particularly pathological changes in the small vessels and capillaries, and are therefore important indicators of cardiovascular and cerebrovascular lesions. The caliber of the vessels and their crossing patterns are also strongly suggestive of the disease type and severity of cardiovascular and cerebrovascular disease. In addition, the deposition of pigmented particles such as lipid particles in the vessels indicates a high risk of cardiovascular and cerebrovascular disease and is of great value for identifying high-risk groups. The type and risk of cardiovascular and cerebrovascular disease occurrence can therefore be effectively predicted from fundus images.
In conventional practice, a doctor can manually analyze a patient's fundus image data, but manual identification is difficult, slow and inefficient, with limited accuracy, and cannot scale to large amounts of data. With technological progress, artificial intelligence techniques such as deep learning have been brought into medical diagnosis and can effectively help doctors analyze all kinds of data. However, every part of the fundus image is yellowish in hue with low contrast, and the fundus vessels grow in a network with numerous branches of varying caliber that cross one another, so ordinary deep learning network models find these image features hard to recognize, extract information poorly, and generalize badly because they are strongly affected by differences between populations.
Disclosure of Invention
The invention aims to overcome at least one deficiency of the prior art by providing a method, a system, computer equipment and a storage medium for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, thereby realizing such prediction from fundus images, solving the problems that ordinary deep learning network models recognize image features with difficulty, extract information poorly and generalize badly under population differences, and achieving high prediction accuracy.
In order to solve the above technical problems, the invention provides, as one technical solution, a method for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, comprising the following steps:
S1, acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up, the occurrence information comprising the disease types and disease conditions arising within three and five years;
S2, using the fundus images in the sample data set as input data and the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up as data labels, inputting them into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, and constructing a fundus-image-based model for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases, wherein the prediction model comprises a three-year disease-type prediction model, a three-year risk prediction model, a five-year disease-type prediction model and a five-year risk prediction model;
and S3, inputting the fundus image of the patient to be examined into the constructed prediction model and outputting a prediction result, the prediction result comprising the disease-category classification and occurrence-risk grade of cardiovascular and cerebrovascular disease for that patient over the following three and five years.
The Inception-ResNet-V2 model combines the advantages of the Inception and ResNet architectures. It is a variant of the earlier Inception-V3 model and a convolutional neural network that achieved state-of-the-art accuracy on the ILSVRC image classification benchmark. Fundus image content includes the retina, optic disc, macula, and the arteriovenous vessels and their branches. Because every part of the fundus image is yellowish and the contrast is low, the strong contrast-enhancement capability of Inception-ResNet-V2 is valuable: it can greatly amplify weak color differences between regions and extract the informational differences between pixels, assisting accurate localization and segmentation of different structures, and of normal versus diseased parts of the same structure. At the same time, the model is trained with dimensionality reduction that does not lose image information, which greatly improves its information extraction capability. Moreover, there is no need to decide manually which filter to use or whether pooling is needed: the model determines these parameters automatically, and by learning from a large sample of fundus pictures it settles on the parameters and filter combinations it needs. It therefore has good adaptivity and can effectively solve the above technical problem.
Further, the specific steps of obtaining the sample data set of cardiovascular and cerebrovascular diseases in step S1 include: cropping the fundus images in the sample data set to the same size, and dividing the sample data set into a training data set, a verification data set and a test data set.
The fundus images are cropped to the same size to standardize the input image data and avoid prediction deviations caused by inconsistent sizes. The sample data set is divided so as to allocate the available data sensibly: the training data set is used for feature extraction and model training; the verification data set is used to check the training result after each parameter update; and the test data set is used to measure the final prediction performance of the model once its parameters have been finalized.
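As a minimal sketch of this preprocessing step, the code below crops images to a uniform size and splits the sample data set. The 70/15/15 split ratio and the list-of-lists image representation are illustrative assumptions; the patent does not specify them.

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle and split (image, label) pairs into train/validation/test sets.

    The 70/15/15 ratio is an assumption for illustration only.
    """
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    n = len(samples)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test

def center_crop(image, size):
    """Center-crop a 2D list-of-lists image to size x size,
    so that all inputs share the same dimensions."""
    h, w = len(image), len(image[0])
    top = (h - size) // 2
    left = (w - size) // 2
    return [row[left:left + size] for row in image[top:top + size]]
```

A fixed random seed keeps the split reproducible across runs, which matters when the verification set is reused after every training iteration.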
Further, in step S2 the input data are fed into the Inception-ResNet-V2 model for feature extraction; the specific steps include:
s21, performing primary feature extraction on the image data after the training data are centrally cut through a stem module, specifically, performing convolution calculation with 3 x 3 and step length of 2 on the image data to extract a feature Y1, performing convolution calculation with 3 x 3 twice on Y1 to obtain a feature Y2, performing convolution calculation with 3 x 3 and step length of 2 on Y2 once and convolution calculation with 3 x 3 once respectively, performing feature splicing on the two outputs to obtain a feature Y3, performing convolution calculation with 1 x 1, 3 x 3 and 1 x 1, 7 x 1, 1 x 7 and 3 x 3 respectively on Y3, splicing the output features to obtain a feature Y4, and finally performing convolution calculation with 3 x 3 and maximum convolution operation on Y4 to obtain a feature Y5;
s22, performing feature extraction by taking Y5 as the input of a module increment-ResNet-A, specifically, performing Relu activation layer processing on the feature Y5 output by the stem module, performing 1 × 1, 3 × 3 and 3 × 3 convolution calculation on three output features respectively, performing 1 × 1 linear convolution activation on the three output features to obtain a feature Y6, performing feature splicing on Y6 and the input activated feature Y5, and finally performing Relu activation to obtain an output feature Y7;
s23, taking Y7 as input of a module Reduction-A, further abstracting and extracting characteristics Y8 through 3-path convolution and maximum pooling calculation, inputting Y8 into an inclusion-ResNet-B module to obtain characteristics Y9, inputting Reduction-B to reduce the characteristic size to obtain characteristics Y10, inputting the characteristics Y10 into an inclusion-ResNet-C module to perform final convolution calculation, and obtaining characteristics Y11;
s24, performing pooling operation on the characteristic Y11, performing dropout operation by reserving the node number to be 0.8, and finally classifying the characteristic Y11 in a softmax classifier to obtain two classification outputs of one-dimensional characteristic vectors 1 x 2, mapping the output results to cardiovascular lesions or cerebrovascular lesions of three-year patients respectively, and constructing a three-year disease species prediction model.
S25, repeating steps S21–S23 to construct the three-year risk-grade classification model: a pooling operation is performed on feature Y11, dropout is applied with a keep probability of 0.8, and a softmax classifier finally produces a 1 × 3 one-dimensional feature vector whose outputs are mapped to the patient's three-year risk of cardiovascular and cerebrovascular disease, namely low, medium or high risk.
S26, repeating steps S21–S25 to construct the five-year disease-type and occurrence-risk-grade prediction models.
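The classification heads of steps S24–S25 (pooling, dropout with keep probability 0.8, softmax producing a 1 × 2 or 1 × 3 vector) can be sketched in pure Python. The feature-map sizes, weights and biases below are placeholders chosen for illustration, not values from the patent.

```python
import math
import random

def global_average_pool(feature_map):
    """Average each channel of an H x W x C feature map down to a 1 x C vector."""
    h, w, c = len(feature_map), len(feature_map[0]), len(feature_map[0][0])
    pooled = [0.0] * c
    for row in feature_map:
        for px in row:
            for k in range(c):
                pooled[k] += px[k]
    return [v / (h * w) for v in pooled]

def dropout(vec, keep_prob=0.8, training=False, rng=None):
    """Inverted dropout with keep probability 0.8, as in steps S24-S25.
    At inference time (training=False) it is the identity."""
    if not training:
        return list(vec)
    rng = rng or random.Random(0)
    return [v / keep_prob if rng.random() < keep_prob else 0.0 for v in vec]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classification_head(feature_map, weights, biases):
    """Pool -> dropout -> linear layer -> softmax. With 2 output units this
    yields the 1 x 2 disease-category output of S24; with 3 units, the
    1 x 3 low/medium/high risk output of S25."""
    x = dropout(global_average_pool(feature_map), keep_prob=0.8)
    logits = [sum(wi * xi for wi, xi in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    return softmax(logits)
```

The same backbone feature Y11 feeds both heads; only the number of output units and the label mapping differ between the disease-type and risk-grade models.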
Further, the convolution calculation formula in the Inception-ResNet-V2 model is as follows:

v_{i,j}^{x,y} = f\left( b_{i,j} + \sum_m \sum_{p=0}^{P_i - 1} \sum_{q=0}^{Q_i - 1} w_{i,j,m}^{p,q} \, v_{i-1,m}^{x+p,\, y+q} \right)

where v_{i,j}^{x,y} represents the value of the jth feature map of the ith layer at (x, y); P_i and Q_i represent the dimensions of the convolution kernel used by the ith layer; w_{i,j,m}^{p,q} represents the weight, at kernel position (p, q), of the convolution kernel connecting the mth feature map of layer i−1 to the jth feature map of the ith layer (applied at the point (x+p, y+q)); and b_{i,j} and f(·) are the bias term and activation function of the standard convolution formula;
the Relu activation layer calculation formula is as follows:
F(a)=max(0,a)
wherein a represents the input of the active layer, and F (a) represents the output of the active layer;
the calculation formula of the Softmax classifier is as follows:
Figure BDA0002770213640000044
wherein z isjIs the jth input variable, M is the number of input variables,
Figure BDA0002770213640000045
the probability of outputting a category j is represented for the output.
In a fundus image the vessels grow in a network, with numerous branches of varying caliber that cross one another, making model recognition difficult; yet vessel caliber and crossing patterns are strongly suggestive of cardiovascular and cerebrovascular disease. The model can accurately segment more than 95% of the fundus vessel information. Adding the nonlinear ReLU activation layers increases the expressive power of the network: nonlinearity is increased substantially while the scale of the convolved image is preserved (i.e., no resolution is lost), enabling accurate image segmentation. The model also contains many 1 × 1 convolution operations, which reduce dimensionality, so the model is trained with dimensionality reduction that loses no image information, greatly improving its information extraction capability. In addition, the model comprises three Inception-ResNet modules containing residual connections, which accelerate network convergence, speed up training and let the training error keep decreasing as the network depth grows, while the Reduction modules between the Inception-ResNet modules act as pooling layers and likewise reduce dimensionality.
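The dimensionality-reducing 1 × 1 convolutions and the residual connections mentioned above can be illustrated with a minimal sketch; the shapes and values are placeholders, not values from the model.

```python
def conv1x1(feature_map, weights):
    """Apply a 1x1 convolution: each output channel is a weighted sum of the
    input channels at the same pixel, so the H x W spatial size is preserved
    while the channel count changes from len(weights[0]) to len(weights)."""
    return [[[sum(w * px[k] for k, w in enumerate(wout)) for wout in weights]
             for px in row] for row in feature_map]

def residual_add(x, fx):
    """Residual connection: elementwise x + F(x) over an H x W x C map,
    as used inside the Inception-ResNet blocks to ease gradient flow."""
    return [[[a + b for a, b in zip(px, fpx)]
             for px, fpx in zip(row, frow)]
            for row, frow in zip(x, fx)]
```

Because the 1 × 1 convolution mixes only channels, it can shrink a wide feature map cheaply before an expensive 3 × 3 or 7 × 7 convolution, which is how the Inception family keeps its parameter count down.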
In addition, the network of the invention has a softmax branch, so that the hidden units and intermediate layers that participate in feature computation also contribute to the picture's prediction. This has a regularizing effect on the whole network model and prevents overfitting, adapting the model to the needs of predicting cardiovascular and cerebrovascular disease from fundus pictures across different populations.
Further, step S2 feeds the input data into the Inception-ResNet-V2 model for training; the specific steps include:
s27, inputting the image data cut from the training data set into an increment-ResNet-V2 model, obtaining an output feature vector, namely a feature vector output in a classification mode, by a model network through forward propagation, and calculating the output feature vector through a loss function cross entropy formula to obtain loss, wherein the loss function cross entropy formula has the following calculation formula:
Figure BDA0002770213640000051
wherein, P (x)i) Representing the true probability of belonging to the ith sample, q (x)i) Representing the prediction probability of the ith sample;
and S28, reversely propagating to the model network in a gradient updating mode to update the network parameters, and repeating the updating iteration process of the parameters to enable the classification output of the model to continuously approach to the real label.
Further, in step S2 the input data are used to verify the Inception-ResNet-V2 model, the verification process comprising internal verification and external verification. In internal verification, the verification data set is used for testing at the end of each training iteration, and the test result reflects the training effect of that iteration; in external verification, the test data set is used for testing after the final iteration, and the test result reflects the final performance of the model.
Model verification means reflecting the model's predictive performance through certain indices; common classification evaluation indices include accuracy, specificity, sensitivity, positive predictive value, negative predictive value, the AUC curve and the loss-value curve. These indices are obtained by classifying the input picture data with the model and computing the deviation between the predicted and actual status of all positive and negative samples, and they reflect the model's predictive ability on positive and negative samples. During training, internal verification tests the model on the verification data set at the end of each iteration, and the classification evaluation indices of that test reflect the training effect of the iteration. Training is considered complete after a fixed number of iteration cycles, or when the evaluation indices on the verification set stop improving over a certain number of cycles. After the final iteration, i.e., once training is complete, the model's performance is verified on the externally held-out test data set, whose evaluation indices reflect the final performance of the model.
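The tabular evaluation indices named above can be computed directly from a confusion matrix. A minimal sketch for the binary case, with hypothetical labels, might look like:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, PPV and NPV from binary labels
    (1 = positive, 0 = negative) -- the evaluation indices described above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,   # true positive rate
        "specificity": tn / (tn + fp) if tn + fp else 0.0,   # true negative rate
        "ppv": tp / (tp + fp) if tp + fp else 0.0,           # positive predictive value
        "npv": tn / (tn + fn) if tn + fn else 0.0,           # negative predictive value
    }
```

The AUC, by contrast, is computed from the model's probability scores rather than its hard labels, so it would take the softmax outputs rather than the argmax predictions as input.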
In order to solve the above technical problems, the invention provides, as another technical solution, a system for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, the system comprising:
an acquisition module for acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up, the occurrence information comprising the disease types and disease conditions arising within three and five years;
a data processing module for cropping the fundus images in the sample data set to the same size, and dividing the sample data set into a training data set, a verification data set and a test data set;
a model building module for inputting the fundus images in the sample data set into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, with the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up as data labels, and constructing a fundus-image-based model for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases, the prediction model comprising a three-year disease-type prediction model, a three-year risk prediction model, a five-year disease-type prediction model and a five-year risk prediction model;
and a prediction module for inputting the fundus image of the patient to be examined into the constructed prediction model to obtain the prediction result, the prediction result comprising the disease-category classification and occurrence-risk grade of cardiovascular and cerebrovascular disease for that patient over the following three and five years.
Further, the model building module comprises:
the feature extraction module is used for inputting the image data in the training data set into an increment-ResNet-V2 model for feature extraction;
the model training module is used for inputting image data in a large number of training data sets into an increment-ResNet-V2 model for training, and enabling the classification output of the model to continuously approach to a real label through continuous updating iteration of network parameters;
and the model verification module is used for verifying the training and performance effects of the model.
In order to solve the above technical problems, the invention further provides a computer device comprising a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the above method for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images when executing the computer program.
In order to solve the above technical problems, the invention further provides a storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above method for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images.
Compared with the prior art, the invention has the beneficial effects that:
1. the method analyzes fundus images with the Inception-ResNet-V2 neural network model to predict the occurrence types and risks of cardiovascular and cerebrovascular diseases; it is simple, convenient and fast, requiring no assistance from other electronic imaging;
2. the Inception-ResNet-V2 model includes nonlinear ReLU activation layers, improving the expressive power of the network and achieving accurate image segmentation;
3. the Inception-ResNet-V2 model includes many 1 × 1 convolution operations, which reduce dimensionality, so the model is trained with dimensionality reduction that loses no image information, greatly improving its information extraction capability;
4. the Inception-ResNet-V2 model includes Inception-ResNet modules with residual connections, which accelerate network convergence, speed up training and let the training error keep decreasing as the network depth grows;
5. the invention uses softmax as the classifier in the Inception-ResNet-V2 model, which regularizes the whole network, prevents overfitting so as to meet the needs of predicting cardiovascular and cerebrovascular disease from fundus pictures across different populations, and enhances the generality of the model.
Drawings
Fig. 1 is a schematic flow chart of a prediction method in embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the Inception-ResNet-V2 model architecture in embodiment 1 of the present invention.
Fig. 3 is a schematic flow chart of the stem module of the Inception-ResNet-V2 model in embodiment 1 of the present invention.
Fig. 4 is a schematic flow chart of the Inception-ResNet-A module of the Inception-ResNet-V2 model in embodiment 1 of the present invention.
Fig. 5 is a schematic structural diagram of a prediction system in embodiment 2 of the present invention.
Fig. 6 is a schematic structural diagram of a model building module in the prediction system in embodiment 2 of the present invention.
Fig. 7 is a normal fundus image in embodiment 1 of the present invention.
Fig. 8 is a fundus image of a patient with cardiovascular disease in example 1 of the present invention.
Fig. 9 shows one of the two contrast-enhancement modes applied to a normal fundus image in embodiment 1 of the present invention.
Fig. 10 shows the other of the two contrast-enhancement modes applied to a normal fundus image in embodiment 1 of the present invention.
Fig. 11 shows the vessel segmentation by another neural network model for the first contrast-enhancement mode of the fundus image in embodiment 1 of the present invention.
Fig. 12 shows the vessel segmentation by another neural network model for the second contrast-enhancement mode of the fundus image in embodiment 1 of the present invention.
Fig. 13 shows the small-vessel segmentation by another neural network model for the two contrast-enhancement modes of the fundus image in embodiment 1 of the present invention.
Fig. 14 shows the vessel segmentation by the Inception-ResNet-V2 model in the prediction method of embodiment 1 of the present invention for the first contrast-enhancement mode of the fundus image.
Fig. 15 shows the vessel segmentation by the Inception-ResNet-V2 model in the prediction method of embodiment 1 of the present invention for the second contrast-enhancement mode of the fundus image.
FIG. 16 is a diagram showing the hard exudation of the fundus image of a patient with cardiovascular disease in example 1 of the present invention.
FIG. 17 shows early stage small range punctate hard exudation of fundus images of patients with cardiovascular diseases in example 1 of the present invention.
Fig. 18 shows the hard exudate information extracted by the Inception-ResNet-V2 model in the prediction method of embodiment 1 of the present invention for the first contrast-enhancement mode of the fundus image.
Fig. 19 shows the hard exudate information extracted by the Inception-ResNet-V2 model in the prediction method of embodiment 1 of the present invention for the second contrast-enhancement mode of the fundus image.
Fig. 20 shows the hard exudate information extracted from the fundus image by another neural network model in embodiment 1 of the present invention.
Description of reference numerals: the system comprises an acquisition module 10, a data processing module 20, a model construction module 30, a prediction module 40, a feature extraction module 31, a model training module 32 and a model verification module 33.
Detailed Description
The drawings are only for purposes of illustration and are not to be construed as limiting the invention. For a better understanding of the following embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
Example 1
As shown in fig. 1, the present embodiment provides a method for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, the method including the following steps:
s1, acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up, and the occurrence conditions comprise the disease types and disease conditions of the cardiovascular and cerebrovascular diseases occurring within three years and within five years;
s2, taking the fundus images in the sample data set as input data and the three-year and five-year follow-up occurrence of cardiovascular and cerebrovascular diseases as data labels, inputting them into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, and constructing a fundus-image-based prediction model of cardiovascular and cerebrovascular occurrence types and risks, wherein the prediction model comprises a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model and a five-year risk prediction model;
and S3, inputting the fundus image of the patient to be examined into the constructed prediction model, and outputting a prediction result, wherein the prediction result comprises the disease type classification and the occurrence risk grade classification of cardiovascular and cerebrovascular diseases for the patient over the coming three years and five years.
The model Inception-ResNet-V2 combines the advantages of the Inception and ResNet models; it is a variant of the earlier Inception-V3 model and a convolutional neural network that achieved top accuracy on the ILSVRC image classification task. The fundus image content includes the retina, optic disc, macula, arteriovenous vessels and their branches. Each part of the fundus image tends toward a yellow color system with low contrast. The Inception-ResNet-V2 model has strong image contrast-enhancement capability: it can strongly amplify the weak color differences between regions and extract the differences in information between pixels, thereby assisting in accurately locating and segmenting different structures, as well as normal and lesioned parts of the same structure. Meanwhile, the model is trained on reduced-dimension features without losing image information, greatly improving its information extraction capability. In addition, there is no need to manually decide which filters to use or whether pooling is needed: these parameters are determined automatically by the model, and through learning on a large sample of fundus pictures the model determines which parameters are needed and which filters to combine. The model therefore has good adaptive characteristics and can effectively solve the technical problem.
Further, the step S1 of obtaining a sample data set of cardiovascular and cerebrovascular diseases comprises the specific steps of: cutting the fundus images in the sample data set into images of the same size (in this embodiment, 299 × 299 × 3 three-channel image data), and dividing the sample data set into a training data set, a verification data set and a test data set.
The fundus images are cut to the same size so as to unify the input image data and avoid prediction deviations caused by inconsistent sizes. The sample data set is divided so as to allocate the obtained samples reasonably: the training data set is used for feature extraction and model training, the verification data set is used to check the training result after each parameter update, and the test data set is used to measure the final prediction performance of the model after the last parameter update.
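As an illustration of the split described above, a minimal sketch in Python (the field names, file naming and 70/15/15 ratio are assumptions for illustration, not specified by the patent):

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle the sample data set and divide it into training,
    verification and test subsets (the 70/15/15 ratio is illustrative)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Each sample pairs a fundus image with its three- and five-year labels.
samples = [{"image": f"fundus_{i}.png", "label_3y": i % 2, "label_5y": i % 3}
           for i in range(100)]
train, val, test = split_dataset(samples)
print(len(train), len(val), len(test))  # 70 15 15
```

The fixed seed makes the split reproducible, so the same verification and test sets are used across training runs.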
Further, as shown in fig. 2, in this embodiment the input data is fed into the Inception-ResNet-V2 model in step S2 for feature extraction, with the following specific steps:
s21, as shown in fig. 3, the 299 × 299 × 3 three-channel image data cropped from the training data set undergoes initial feature extraction in the stem module. Specifically, the image data is passed through a 3 × 3 convolution with stride 2 to extract the 149 × 149 × 32 feature Y1; Y1 is then passed through two 3 × 3 convolutions to obtain the 147 × 147 × 64 feature Y2; Y2 is then passed in parallel through a 3 × 3 max pooling with stride 2 and a 3 × 3 convolution with stride 2, and the two outputs are spliced along the channel dimension, increasing the channel count, to obtain the feature Y3 of size 73 × 73 × 160; Y3 is then passed through two branches of convolutions (1 × 1, 3 × 3 and 1 × 1, 7 × 1, 1 × 7, 3 × 3), and the branch outputs are spliced to obtain the feature Y4 of size 71 × 71 × 192; finally Y4 is passed in parallel through a stride-2 3 × 3 convolution and a max pooling operation, and after the features are spliced the feature Y5 of size 35 × 35 × 384 is obtained;
s22, Y5 is used as the input of the Inception-ResNet-A module for feature extraction. Specifically, as shown in fig. 4, the feature Y5 output by the stem module is first processed by a ReLU activation layer, then passed through three parallel convolution branches (1 × 1; 1 × 1, 3 × 3; and 1 × 1, 3 × 3, 3 × 3); the three branch outputs are combined through a 1 × 1 linear convolution to obtain the feature Y6, Y6 is combined with the activated input feature Y5 through the residual connection, and a final ReLU activation yields the output feature Y7 of size 35 × 35 × 384;
s23, Y7 is used as the input of the Reduction-A module, and through three-path convolution and max pooling calculation the more abstract feature Y8 of size 17 × 17 × 1154 is extracted; Y8 is input into the Inception-ResNet-B module to obtain the feature Y9 of the same size; Y9 is input into Reduction-B to reduce the feature size, obtaining the feature Y10 of size 8 × 8 × 2048; and Y10 is input into the Inception-ResNet-C module for the final convolution calculation, obtaining the feature Y11 of size 8 × 8 × 2048;
s24, the feature Y11 is pooled, a dropout operation with keep probability 0.8 is applied, and the result finally enters a softmax classifier to obtain a two-class output, a one-dimensional feature vector of size 1 × 2, whose components map respectively to cardiovascular lesion and cerebrovascular lesion for patients within three years, thus constructing the three-year disease type prediction model.
S25, steps S21 to S23 are repeated to construct the three-year occurrence risk grade classification model: the feature Y11 is pooled, a dropout operation with keep probability 0.8 is applied, and the result finally enters a softmax classifier to obtain a one-dimensional feature vector of size 1 × 3, whose components map to the patient's risk of suffering cardiovascular and cerebrovascular diseases within three years, namely low, medium and high risk.
S26, steps S21 to S25 are repeated to construct the five-year prediction models of cardiovascular and cerebrovascular disease type and occurrence risk grade.
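The spatial sizes quoted in step S21 (149, 147, 73 and 35) are consistent with the 'valid' convolution size formula out = floor((n + 2*pad - k)/stride) + 1. A quick arithmetic check, as a sketch assuming the usual padding choices of the published Inception-ResNet-V2 stem:

```python
def conv_out(n, k, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2*pad - k) / stride) + 1."""
    return (n + 2 * pad - k) // stride + 1

n = 299
y1 = conv_out(n, 3, stride=2)    # 3x3 conv, stride 2          -> 149 (Y1)
y2 = conv_out(y1, 3)             # first 'valid' 3x3 conv      -> 147 (Y2; the second 3x3 is padded)
y3 = conv_out(y2, 3, stride=2)   # 3x3 max pool / conv, stride 2 -> 73 (Y3)
y4 = conv_out(y3, 3)             # trailing 'valid' 3x3 conv in each branch -> 71 (Y4)
y5 = conv_out(y4, 3, stride=2)   # stride-2 conv / max pool    -> 35 (Y5)
print(y1, y2, y3, y4, y5)        # 149 147 73 71 35
```

The channel counts (32, 64, 160, 384) come from the number of kernels in each layer and from the concatenations, not from this formula.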
Further, the convolution calculation formula in the Inception-ResNet-V2 model is as follows:

$$v_{i,j}^{x,y} = f\left(b_{i,j} + \sum_{m}\sum_{p=0}^{P_i-1}\sum_{q=0}^{Q_i-1} w_{i,j,m}^{p,q}\, v_{i-1,m}^{x+p,\,y+q}\right)$$

wherein $v_{i,j}^{x,y}$ represents the value of the jth feature map of the ith layer at (x, y); $P_i$ and $Q_i$ represent the size of the convolution kernel used by the ith layer; and $w_{i,j,m}^{p,q}$ represents the weight at point (p, q) of the convolution kernel connecting the mth feature map of layer i-1 with the jth feature map of the ith layer, applied at the (x + p, y + q) point;

the ReLU activation layer calculation formula is as follows:

$$F(a) = \max(0, a)$$

wherein a represents the input of the activation layer and F(a) represents its output;

the calculation formula of the softmax classifier is as follows:

$$\sigma(z)_j = \frac{e^{z_j}}{\sum_{m=1}^{M} e^{z_m}}$$

wherein $z_j$ is the jth input variable, M is the number of input variables, and the output $\sigma(z)_j$ represents the probability of category j.
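A direct NumPy transcription of the three formulas above, for verification only (the tiny input, averaging kernel and bias value are illustrative assumptions):

```python
import numpy as np

def conv_relu(v_prev, w, b=0.0):
    """One output feature map of the convolution formula:
    v[x, y] = F(b + sum_m sum_p sum_q w[m, p, q] * v_prev[m, x+p, y+q]),
    with the ReLU activation F(a) = max(0, a)."""
    M, P, Q = w.shape
    H = v_prev.shape[1] - P + 1
    W = v_prev.shape[2] - Q + 1
    out = np.empty((H, W))
    for x in range(H):
        for y in range(W):
            a = b + np.sum(w * v_prev[:, x:x + P, y:y + Q])
            out[x, y] = max(0.0, a)  # ReLU clips negative activations to 0
    return out

def softmax(z):
    """sigma(z)_j = exp(z_j) / sum_m exp(z_m), numerically stabilized."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

v = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # two 4x4 input feature maps
w = np.full((2, 3, 3), 1.0 / 18.0)                      # averaging 3x3 kernel
feat = conv_relu(v, w, b=-15.0)   # bias chosen so ReLU zeroes some outputs
probs = softmax(np.array([1.0, 2.0, 3.0]))
print(feat.shape, probs.sum())    # (2, 2) 1.0
```

A softmax over 2 or 3 logits gives exactly the 1 × 2 and 1 × 3 probability outputs used in steps S24 and S25.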
In fundus images, the fundus blood vessels grow in a network, with numerous branches of varying thickness crossing one another, making model identification difficult. However, the thickness and crossing pattern of the vessels are highly indicative of cardiovascular and cerebrovascular diseases. The model can accurately segment more than 95% of fundus vessel information. Adding nonlinear ReLU activation layers improves the expressive capability of the network: nonlinearity is greatly increased while the scale of the convolved image is kept unchanged (i.e., without loss of resolution), enabling accurate image segmentation. Meanwhile, the model's many 1 × 1 convolution operations reduce dimensionality, so the model is trained on reduced-dimension features without losing image information, greatly improving its information extraction capability. The model also comprises three kinds of Inception-ResNet modules, which contain residual connections that accelerate network convergence and speed up training, with the training error decreasing gradually as network depth increases; the Reduction modules between the Inception-ResNet modules act as pooling layers and also reduce dimensionality.
In addition, the model network of the invention has a softmax branch, so that even hidden units and intermediate layers that participate in feature calculation can contribute to the predicted result for a picture; this regularizes the whole network model and prevents overfitting, adapting it to the fundus-picture-based cardiovascular and cerebrovascular prediction needs of different groups of people.
Further, in step S2 the input data is fed into the Inception-ResNet-V2 model for training, with the following specific steps:
s27, the cropped image data from the training data set is input into the Inception-ResNet-V2 model; the model network obtains the classified output feature vector through forward propagation, and the loss is computed from the output feature vector via the cross-entropy loss function, whose formula is as follows:

$$H(p, q) = -\sum_{i} p(x_i) \log q(x_i)$$

wherein $p(x_i)$ represents the true probability of the ith sample and $q(x_i)$ represents the predicted probability of the ith sample;
and S28, the loss is back-propagated through the model network in a gradient-update manner to update the network parameters, and this parameter update iteration is repeated so that the classification output of the model continuously approaches the real labels.
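Steps S27 and S28 (forward propagation, cross-entropy loss, gradient update) can be sketched on a stand-in linear classifier; the feature vectors and labels below are synthetic assumptions, not patent data:

```python
import numpy as np

def softmax_rows(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p_true, q_pred):
    """H(p, q) = -sum_i p(x_i) * log q(x_i), averaged over the batch."""
    return float(-np.mean(np.sum(p_true * np.log(q_pred + 1e-12), axis=1)))

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))      # stand-in for extracted feature vectors
labels = (X[:, 0] > 0).astype(int)     # stand-in two-class labels
P = np.eye(2)[labels]                  # one-hot 'real label' distribution

W = np.zeros((10, 2))                  # the trainable parameters
losses = []
for _ in range(50):                    # S27/S28: forward pass, loss, update
    Q = softmax_rows(X @ W)            # forward propagation -> predictions
    losses.append(cross_entropy(P, Q))
    grad = X.T @ (Q - P) / len(X)      # gradient of the cross-entropy loss
    W -= 0.5 * grad                    # back-propagated gradient update
print(round(losses[0], 3), round(losses[-1], 3))  # the loss falls as the output approaches the labels
```

In the patent's method the same loop runs over the full Inception-ResNet-V2 network, with the gradients propagated through all layers rather than a single weight matrix.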
Further, in step S2 the input data is fed into the Inception-ResNet-V2 model for verification, wherein the verification process comprises internal verification and external verification: in internal verification, during training, the data of the verification data set is used for testing at the end of each iteration, and the test result reflects the model training effect of that iteration; in external verification, after the final iteration is finished, the data in the test data set is used for testing, and the test result reflects the final performance of the model.
Model verification means reflecting the model's prediction performance through certain indexes; common classification evaluation indexes include accuracy, specificity, sensitivity, positive predictive value, negative predictive value, AUC curves, loss curves and the like. These indexes are obtained by classifying the input picture data with the model and computing the deviation between the predicted and actual outcomes over all input positive and negative samples, thereby reflecting the model's predictive power on positive and negative samples. During training, internal verification tests with the verification data set at the end of each iteration, and the classification evaluation indexes of the test reflect that iteration's training effect. Training is considered complete after a set number of iteration cycles, or when the evaluation index on the verification set no longer improves over some number of cycles. After the final iteration, i.e., when training is complete, the model's effect is verified on the externally held-out test set data, and the evaluation indexes of its predictive ability on the test data set reflect the model's final performance. In this embodiment, the prediction accuracy of the prediction method reaches 90% for three-year cardiovascular and cerebrovascular disease types, 95% for three-year occurrence risk, 80% for five-year disease types, and 85% for five-year occurrence risk.
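The evaluation indexes listed above can be computed from a binary confusion matrix; a minimal sketch with hypothetical counts (not the patent's reported results):

```python
def classification_metrics(tp, fp, tn, fn):
    """Evaluation indexes computed from a binary confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts from an external-verification test set of 200 images.
m = classification_metrics(tp=90, fp=10, tn=85, fn=15)
print(m["accuracy"])  # 0.875
```

An AUC curve additionally requires the model's raw probabilities at many thresholds, not just the hard predictions used here.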
In this embodiment, a normal fundus image is shown in fig. 7 and a fundus image of a patient with cardiovascular and cerebrovascular diseases in fig. 8; it can be seen that the vessels in the patient's fundus image are thin with rough branches, and each part of the fundus image tends toward a yellowish color system with low contrast. The two contrast-enhancement modes for a normal fundus image are shown in figs. 9 and 10. Figs. 11 and 12 show the vessel segmentation by another neural network model for the two contrast-enhancement modes: the completeness of its vessel segmentation is low and small-vessel information is lost. Fig. 13 shows that model's small-vessel segmentation for the two modes, which is consistent with the above: its ability to segment the small vessels of the fundus image is low. Figs. 14 and 15 show the vessel segmentation by the Inception-ResNet-V2 model in the prediction method of this embodiment for the two contrast-enhancement modes; it is evident that this model has a strong ability to segment the vessels of fundus images, extracting most of the small-vessel information accurately. This demonstrates that the Inception-ResNet-V2 model in the method of this embodiment has clear advantages over other models, namely strong vessel segmentation capability and high information extraction capability.
Meanwhile, fig. 16 shows the hard exudate problem (bright yellow areas in the figure) in the fundus image of a patient with cardiovascular and cerebrovascular diseases. Hard exudates are important for cardiovascular and cerebrovascular prediction, but the information features of these pigment particles are not obvious, so human-eye identification is difficult and accurate labeling cannot be performed; fig. 17 is an example of early, small-range punctate hard exudation in cardiovascular and cerebrovascular disease. As shown in figs. 18 and 19, the Inception-ResNet-V2 model in the prediction method of this embodiment effectively extracts the hard exudate information under both contrast-enhancement modes, whereas, as shown in fig. 20, other neural network models cannot. This again shows that the Inception-ResNet-V2 model in this embodiment has higher information extraction capability than other models and is more favorable for predicting the probability of cardiovascular and cerebrovascular occurrence from fundus images in the invention.
Example 2
As shown in fig. 5, the present embodiment provides a cardiovascular and cerebrovascular disease occurrence type and risk prediction system based on fundus images, the system includes:
the acquisition module 10 is used for acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up, and the occurrence conditions comprise the disease types and disease conditions of the cardiovascular and cerebrovascular diseases occurring within three years and within five years;
the data processing module 20 is used for cutting the fundus images in the sample data set into images with the same size, and dividing the sample data set into a training data set, a verification data set and a test data set;
the model building module 30 is used for inputting the fundus images in the sample data set as input data into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, with the three-year and five-year follow-up occurrence of cardiovascular and cerebrovascular diseases as data labels, and building a fundus-image-based prediction model of cardiovascular and cerebrovascular occurrence types and risks, wherein the prediction model comprises a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model and a five-year risk prediction model;
and the prediction module 40 is used for inputting the fundus image of the patient to be examined into the constructed prediction model to obtain the prediction result for that patient, wherein the prediction result comprises the disease type classification and the occurrence risk grade classification of cardiovascular and cerebrovascular diseases over the coming three years and five years.
Further, as shown in fig. 6, the model building module 30 includes:
the feature extraction module 31 is used for inputting the image data in the training data set into the Inception-ResNet-V2 model for feature extraction;
the model training module 32 is used for inputting image data from a large training data set into the Inception-ResNet-V2 model for training; through continuous update iterations of the network parameters, the classification output of the model continuously approaches the real labels;
and the model verification module 33 is used for verifying the training and performance effects of the model.
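The module composition of the system (acquisition module 10 through prediction module 40) can be sketched as plain Python classes; all class names, the dummy data and the majority-vote stand-in "model" are illustrative assumptions, not the patent's implementation:

```python
class AcquisitionModule:                      # acquisition module 10
    def acquire(self):
        # Stand-in for loading fundus images with their follow-up labels.
        return [{"image": [[0.5]], "label_3y": i % 2} for i in range(10)]

class DataProcessingModule:                   # data processing module 20
    def split(self, samples):
        n = len(samples)
        return (samples[: n * 7 // 10],
                samples[n * 7 // 10 : n * 9 // 10],
                samples[n * 9 // 10 :])

class ModelConstructionModule:                # model construction module 30
    def build(self, train, val):
        # Stand-in "model": predicts the majority three-year label of the training set.
        majority = round(sum(s["label_3y"] for s in train) / len(train))
        return lambda image: majority

class PredictionModule:                       # prediction module 40
    def predict(self, model, image):
        return model(image)

acq, proc, build, pred = (AcquisitionModule(), DataProcessingModule(),
                          ModelConstructionModule(), PredictionModule())
train, val, test = proc.split(acq.acquire())
model = build.build(train, val)
print(pred.predict(model, test[0]["image"]))
```

In the actual system the build step would train the Inception-ResNet-V2 network instead of the majority-vote placeholder, but the data flow between the four modules is the same.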
Example 3
The present embodiment provides computer equipment comprising a memory and a processor, wherein the memory stores a computer program; when executing the computer program, the processor implements the steps of the above method for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, such as steps S1 to S3 shown in fig. 1, or implements the functions of the modules of the above system for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images. To avoid repetition, further description is omitted here.
Example 4
The present embodiment provides a storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the above-mentioned cardiovascular and cerebrovascular disease occurrence type and risk prediction method steps based on a fundus image, such as steps S1 to S3 shown in fig. 1, or the processor implements the functions of the above-mentioned cardiovascular and cerebrovascular disease occurrence type and risk prediction system modules based on a fundus image when executing the computer program. To avoid repetition, further description is omitted here.
It is to be understood that the storage medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, and the like.
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the technical solutions of the present invention, and are not intended to limit the specific embodiments of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention claims should be included in the protection scope of the present invention claims.

Claims (10)

1. A cardiovascular and cerebrovascular disease occurrence type and risk prediction method based on fundus images is characterized by comprising the following steps:
s1, acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and occurrence conditions of the cardiovascular and cerebrovascular diseases followed for three years and five years, and the occurrence conditions comprise disease types and disease conditions of the cardiovascular and cerebrovascular diseases occurring for three years and five years;
s2, taking the fundus images in the sample data set as input data and the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up as data labels, inputting them into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, and constructing a fundus-image-based prediction model of cardiovascular and cerebrovascular occurrence types and risks, wherein the prediction model comprises a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model and a five-year risk prediction model;
and S3, inputting the fundus image of the patient to be examined into the constructed prediction model, and outputting a prediction result, wherein the prediction result comprises the disease type classification and the occurrence risk grade classification of cardiovascular and cerebrovascular diseases for the patient over the coming three years and five years.
2. The method for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images according to claim 1, wherein the step S1 of obtaining a sample set of cardiovascular and cerebrovascular diseases comprises the following specific steps: the fundus images in the sample data set are cut into images with the same size, and the sample data set is divided into a training data set, a verification data set and a test data set.
3. The method for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images according to claim 2, wherein in step S2 the input data is input into the Inception-ResNet-V2 model for feature extraction, and the specific steps include:
s21, performing initial feature extraction on the cropped image data of the training data set through a stem module, specifically: performing a 3 × 3 convolution with stride 2 on the image data to extract a feature Y1; performing two 3 × 3 convolutions on Y1 to obtain a feature Y2; performing a 3 × 3 max pooling with stride 2 and a 3 × 3 convolution with stride 2 on Y2 respectively and splicing the two outputs to obtain a feature Y3; performing two branches of convolutions (1 × 1, 3 × 3 and 1 × 1, 7 × 1, 1 × 7, 3 × 3) on Y3 respectively and splicing the output features to obtain a feature Y4; and finally performing a 3 × 3 convolution and a max pooling operation on Y4 and splicing the features to obtain a feature Y5;
s22, performing feature extraction with Y5 as the input of the Inception-ResNet-A module, specifically: performing ReLU activation layer processing on the feature Y5 output by the stem module, passing the result through three parallel convolution branches (1 × 1; 1 × 1, 3 × 3; and 1 × 1, 3 × 3, 3 × 3), applying a 1 × 1 linear convolution to the combined branch outputs to obtain a feature Y6, combining Y6 with the activated input feature Y5 through the residual connection, and finally performing ReLU activation to obtain the output feature Y7;
s23, taking Y7 as input of the Reduction-A module, further abstracting and extracting a feature Y8 through three-path convolution and max pooling calculation, inputting Y8 into the Inception-ResNet-B module to obtain a feature Y9, inputting Y9 into Reduction-B to reduce the feature size and obtain a feature Y10, and inputting Y10 into the Inception-ResNet-C module for the final convolution calculation to obtain a feature Y11;
s24, performing pooling on the feature Y11, applying a dropout operation with keep probability 0.8, and finally entering a softmax classifier for classification to obtain a one-dimensional feature vector output of size 1 × 2, whose components map respectively to cardiovascular lesion and cerebrovascular lesion for patients within three years, thus constructing the three-year disease type prediction model;
S25, repeating steps S21 to S23 to construct the three-year occurrence risk grade classification model: performing pooling on the feature Y11, applying a dropout operation with keep probability 0.8, and finally entering a softmax classifier to obtain a one-dimensional feature vector output of size 1 × 3, whose components map to the patient's risk of suffering cardiovascular and cerebrovascular diseases within three years, namely low, medium and high risk;
S26, repeating steps S21 to S25 to construct the five-year prediction models of cardiovascular and cerebrovascular disease type and occurrence risk grade.
4. The method for predicting the occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images according to claim 3, wherein the convolution calculation formula in the Inception-ResNet-V2 model is as follows:

$$v_{i,j}^{x,y} = f\left(b_{i,j} + \sum_{m}\sum_{p=0}^{P_i-1}\sum_{q=0}^{Q_i-1} w_{i,j,m}^{p,q}\, v_{i-1,m}^{x+p,\,y+q}\right)$$

wherein $v_{i,j}^{x,y}$ represents the value of the jth feature map of the ith layer at (x, y); $P_i$ and $Q_i$ represent the size of the convolution kernel used by the ith layer; and $w_{i,j,m}^{p,q}$ represents the weight at point (p, q) of the convolution kernel connecting the mth feature map of layer i-1 with the jth feature map of the ith layer, applied at the (x + p, y + q) point;

the ReLU activation layer calculation formula is as follows:

$$F(a) = \max(0, a)$$

wherein a represents the input of the activation layer and F(a) represents its output;

the calculation formula of the softmax classifier is as follows:

$$\sigma(z)_j = \frac{e^{z_j}}{\sum_{m=1}^{M} e^{z_m}}$$

wherein $z_j$ is the jth input variable, M is the number of input variables, and the output $\sigma(z)_j$ represents the probability of category j.
5. The method for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images according to claim 4, wherein in the step S2 the input data are input into the Inception-ResNet-V2 model for training, with the following specific steps:
S27, inputting the cropped image data of the training data set into the Inception-ResNet-V2 model; the model network obtains the output feature vector, namely the feature vector output for classification, by forward propagation, and the loss is computed from the output feature vector by the cross-entropy loss function, whose calculation formula is as follows:
H(p, q) = -\sum_{i} p(x_i) \log q(x_i)
wherein p(x_i) represents the true probability of the i-th sample and q(x_i) represents the predicted probability of the i-th sample;
and S28, back-propagating through the model network by means of gradient updates to renew the network parameters, and repeating this iterative parameter update so that the classification output of the model continuously approaches the true labels.
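The cross-entropy loss of step S27, and the gradient that step S28 back-propagates when the loss is taken over a Softmax output, can be sketched as follows. This is an illustrative sketch; the function names and the `eps` clipping are assumptions, not taken from the patent (the q − p gradient form is the standard result for Softmax combined with cross-entropy):

```python
import numpy as np

def cross_entropy(p_true, q_pred, eps=1e-12):
    """H(p, q) = -sum_i p(x_i) * log(q(x_i)).

    p_true: true probability distribution (e.g. one-hot label)
    q_pred: predicted probability distribution
    eps clips q away from 0 to keep log() finite.
    """
    q = np.clip(q_pred, eps, 1.0)
    return -np.sum(p_true * np.log(q))

def softmax_ce_grad(p_true, q_pred):
    """Gradient of cross-entropy w.r.t. the pre-Softmax logits: q - p.

    This is the quantity back-propagated into the network in step S28.
    """
    return q_pred - p_true
```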
6. The method for predicting the occurrence type and risk of cardiovascular and cerebrovascular diseases based on fundus images according to claim 5, wherein in the step S2 the input data are input into the Inception-ResNet-V2 model for verification, the verification comprising internal verification and external verification: internal verification tests the model on the verification data set at the end of each iteration during training, the test result reflecting the training effect of that iteration; external verification tests the model on the test data set after the final iteration, the test result reflecting the final performance of the model.
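The internal/external verification scheme of claim 6 amounts to the following training skeleton; `train_one_epoch` and `evaluate` are hypothetical caller-supplied callables, not names from the patent:

```python
def fit_with_validation(model, train_set, val_set, test_set,
                        epochs, train_one_epoch, evaluate):
    """Training loop with per-epoch internal verification and one final
    external verification, as described in claim 6."""
    history = []
    for epoch in range(epochs):
        train_one_epoch(model, train_set)
        # internal verification: run on the verification set each iteration
        history.append(evaluate(model, val_set))
    # external verification: run on the held-out test set once, at the end
    test_score = evaluate(model, test_set)
    return history, test_score
```

The key property is that the test set influences nothing during training; it is touched exactly once, after the final iteration.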
7. A system for predicting the occurrence and risk of cardiovascular and cerebrovascular diseases based on fundus images, characterized in that the system comprises:
the acquisition module, used for acquiring a sample data set of cardiovascular and cerebrovascular diseases, wherein the sample data set comprises fundus images of past patients and the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up, the occurrence comprising the disease types and disease conditions occurring within three and five years;
the data processing module, used for cropping the fundus images in the sample data set to images of the same size, and dividing the sample data set into a training data set, a verification data set and a test data set;
the model building module, used for inputting the fundus images in the sample data set into the deep learning model Inception-ResNet-V2 for feature extraction, training and verification, with the occurrence of cardiovascular and cerebrovascular diseases at three-year and five-year follow-up as data labels, and for building a fundus-image-based prediction model of cardiovascular and cerebrovascular occurrence type and risk, wherein the prediction model comprises a three-year disease type prediction model, a three-year risk prediction model, a five-year disease type prediction model and a five-year risk prediction model;
and the prediction module, used for inputting the fundus image of the patient to be examined into the constructed prediction model to obtain the prediction result for that patient, the prediction result comprising the disease type classification and the occurrence risk grade of cardiovascular and cerebrovascular diseases within the coming three and five years.
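The data processing module's two jobs (cropping the fundus images to a common size and splitting the sample data set) can be sketched as follows. The split fractions and the center-crop strategy are assumptions of this sketch; the patent specifies neither:

```python
import numpy as np

def split_dataset(n_samples, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle sample indices and split them into training, verification
    and test subsets (fractions are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(n_samples * train_frac)
    n_val = int(n_samples * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def center_crop(img, size):
    """Crop a fundus image to a common (size x size) square, so that all
    images entering the model have the same dimensions."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]
```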
8. The system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images according to claim 7, wherein the model building module comprises:
the feature extraction module, used for inputting the image data of the training data set into the Inception-ResNet-V2 model for feature extraction;
the model training module, used for inputting large amounts of image data from the training data set into the Inception-ResNet-V2 model for training, making the classification output of the model continuously approach the true labels through iterative updates of the network parameters;
and the model verification module is used for verifying the training and performance effects of the model.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, performs the steps of the method according to any one of claims 1 to 6.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN202011246515.1A 2020-11-10 2020-11-10 Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium Pending CN114549541A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011246515.1A CN114549541A (en) 2020-11-10 2020-11-10 Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114549541A true CN114549541A (en) 2022-05-27

Family

ID=81660132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011246515.1A Pending CN114549541A (en) 2020-11-10 2020-11-10 Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114549541A (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046835A (en) * 2019-12-24 2020-04-21 杭州求是创新健康科技有限公司 Eyeground illumination multiple disease detection system based on regional feature set neural network


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114864093A (en) * 2022-07-04 2022-08-05 北京鹰瞳科技发展股份有限公司 Apparatus, method and storage medium for disease prediction based on fundus image
CN115456962A (en) * 2022-08-24 2022-12-09 中山大学中山眼科中心 Choroidal vascular index prediction method and device based on convolutional neural network
CN115456962B (en) * 2022-08-24 2023-09-29 中山大学中山眼科中心 Choroidal blood vessel index prediction method and device based on convolutional neural network
CN115761365A (en) * 2022-11-28 2023-03-07 首都医科大学附属北京友谊医院 Intraoperative hemorrhage condition determination method and device and electronic equipment
CN115761365B (en) * 2022-11-28 2023-12-01 首都医科大学附属北京友谊医院 Method and device for determining bleeding condition in operation and electronic equipment
CN116798625A (en) * 2023-06-26 2023-09-22 清华大学 Stroke risk screening device, electronic equipment and storage medium
CN118115466A (en) * 2024-03-07 2024-05-31 珠海全一科技有限公司 Fundus pseudo focus detection method

Similar Documents

Publication Publication Date Title
CN114549541A (en) Method and system for predicting occurrence types and risks of cardiovascular and cerebrovascular diseases based on fundus images, computer equipment and storage medium
CN108021916A (en) Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN112085745B (en) Retina blood vessel image segmentation method of multichannel U-shaped full convolution neural network based on balanced sampling and splicing
CN113205524B (en) Blood vessel image segmentation method, device and equipment based on U-Net
Kumar et al. Redefining Retinal Lesion Segmentation: A Quantum Leap With DL-UNet Enhanced Auto Encoder-Decoder for Fundus Image Analysis
Boral et al. Classification of diabetic retinopathy based on hybrid neural network
CN112733961A (en) Method and system for classifying diabetic retinopathy based on attention mechanism
CN111080643A (en) Method and device for classifying diabetes and related diseases based on fundus images
CN112132801B (en) Lung bulla focus detection method and system based on deep learning
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
CN112862805B (en) Automatic auditory neuroma image segmentation method and system
CN111028230A (en) Fundus image optic disc and macula lutea positioning detection algorithm based on YOLO-V3
CN111028232A (en) Diabetes classification method and equipment based on fundus images
Zhang et al. MC-UNet multi-module concatenation based on U-shape network for retinal blood vessels segmentation
CN110992309B (en) Fundus image segmentation method based on deep information transfer network
CN113240677B (en) Retina optic disc segmentation method based on deep learning
CN111047590A (en) Hypertension classification method and device based on fundus images
CN113158822B (en) Method and device for classifying eye detection data based on cross-modal relation reasoning
Dong et al. Supervised learning-based retinal vascular segmentation by m-unet full convolutional neural network
CN117934489A (en) Fundus hard exudate segmentation method based on residual error and pyramid segmentation attention
CN117078697B (en) Fundus disease seed detection method based on cascade model fusion
CN115937192B (en) Unsupervised retina blood vessel segmentation method and system and electronic equipment
CN116883420A (en) Choroidal neovascularization segmentation method and system in optical coherence tomography image
CN116109872A (en) Blood vessel naming method and device, electronic equipment and storage medium
CN115393582A (en) Fundus image artery and vein vessel segmentation method based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination