CN115831364A - Type 2 diabetes risk layered prediction method based on multi-modal feature fusion - Google Patents

Type 2 diabetes risk layered prediction method based on multi-modal feature fusion

Info

Publication number
CN115831364A
CN115831364A
Authority
CN
China
Prior art keywords
diabetes
data set
type
risk
blood sugar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211606506.8A
Other languages
Chinese (zh)
Other versions
CN115831364B (en)
Inventor
谢怡宁
阙楠双
谢永华
龙俊
王孝东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University filed Critical Northeast Forestry University
Priority to CN202211606506.8A priority Critical patent/CN115831364B/en
Publication of CN115831364A publication Critical patent/CN115831364A/en
Application granted granted Critical
Publication of CN115831364B publication Critical patent/CN115831364B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a type 2 diabetes risk stratified prediction method based on multi-modal feature fusion and relates to type 2 diabetes risk prediction from retinal fundus images. Existing techniques train deep learning models directly on fundus images, and the features extracted during training are only weakly correlated with disease onset, which leads to low prediction accuracy. To solve this problem, the invention provides a type 2 diabetes risk prediction method based on multi-modal feature fusion that considers features from multiple modalities, including clinical features, biological index features, glucose-related image features, and the predicted blood glucose value; this information is fused together and considered comprehensively, improving the accuracy of disease risk prediction. The invention is applied to predicting the risk of type 2 diabetes.

Description

Type 2 diabetes risk layered prediction method based on multi-modal feature fusion
Technical Field
The invention relates to the field of diabetes prediction and diagnosis, and in particular to type 2 diabetes risk prediction.
Background
In recent years, with the rapid economic development of China, living standards have improved greatly and the intake of high-oil, high-fat food has risen, so both the number of diabetes patients and the prevalence of diabetes in China have increased steadily. According to statistics, the diabetic population in China has grown explosively, making it the country with the most diabetes patients in the world; against the background of an increasingly aging population, the diabetes risk in China will expand further, which means that diabetes prevention and control in China will face continuing challenges. A key reason for the high mortality and disability rates of diabetes is that diabetic lesions cause numerous complications, such as renal failure and vascular complications; among these, coronary artery disease (CAD) is the most important macrovascular complication and diabetic retinopathy (DR) is one of the most important microvascular complications. The incidence of CAD among diabetes patients can reach 55%, and CAD is their most common cause of death, while DR is a leading blinding disease that consumes enormous social and medical resources.
The World Health Organization classifies diabetes into four categories according to etiology: type 1 diabetes, type 2 diabetes, gestational diabetes, and special types of diabetes. Data show that 90%-95% of patients over the age of 35 have type 2 diabetes. Type 2 diabetes is a metabolic disorder characterized by hyperglycemia, and its incidence is related to many factors such as living environment, lifestyle, and genetics. Because type 2 diabetes has a long latent period before clinical diagnosis, establishing an accurate diabetes risk prediction model has great significance and value. Through model-based risk prediction, high-risk groups can be identified as early as possible so that disease prevention can begin, which reduces the occurrence of type 2 diabetes at the source, lowers morbidity, and prevents the disease effectively. However, conventional type 2 diabetes risk prediction models based on retinal fundus images extract features directly from the images; they cannot comprehensively consider the disease factors of type 2 diabetes, perform poorly in multi-year follow-up, and, because key features are insufficiently extracted, cannot effectively improve prediction accuracy.
The invention provides a type 2 diabetes risk stratified prediction method based on multi-modal feature fusion that aims to solve this problem effectively. When predicting type 2 diabetes risk for a population, the method avoids relying on a single feature and instead considers features from multiple modalities, including clinical features, biological index features, glucose-related image features, and the predicted blood glucose value; this information is fused together and considered comprehensively, improving the accuracy of disease risk prediction.
Disclosure of Invention
The invention aims to improve the accuracy of type 2 diabetes prediction and provides a method for stratified prediction of type 2 diabetes risk based on multi-modal feature fusion.
The above object of the invention is mainly achieved by the following technical scheme:
S1, obtaining a retinal fundus image data set comprising a cross-sectional data set D1 and a data set D2 from a retrospective cohort C, wherein the D1 data set consists of retinal fundus images provided by healthy people and people with type 2 diabetes, and the blood glucose value and clinical information of each subject are collected; the retrospective cohort C enrolls people who were healthy and free of type 2 diabetes at a baseline set two years ago, the D2 data set consists of the retinal fundus images provided by these healthy people at baseline, and the clinical information of each subject is collected;
s2, labeling the D1 data set, and taking the blood glucose value of each person as a label corresponding to the retinal fundus image;
S3, labeling the D2 data set according to whether each member of cohort C developed type 2 diabetes within two years after the baseline: those who developed type 2 diabetes are labeled 1, and the others are labeled 0;
S4, on the D1 data set, training a blood glucose model MODEL1 that predicts the fasting blood glucose value by extracting retinal fundus image features, comparing the predicted blood glucose value with the actual blood glucose value to calculate a loss value, and adjusting the parameters of MODEL1 according to the loss value until the prediction performance of MODEL1 reaches a predetermined standard;
and S5, training a multi-modal type 2 diabetes risk prediction model MODEL2 using the data set D2 and stratifying the risk of disease onset in the next two years.
The MODEL2 training steps are as follows: first, the D2 data set is run through the MODEL1 model to obtain predicted blood glucose values, and the feature extraction module of MODEL1 is used to extract the glucose-related features of the D2 data set; second, biological indices are calculated from the retinal fundus images in the D2 data set to obtain the biomarker features; then the clinical information of the subjects in the D2 data set is collected as clinical features, and the clinical features, the biomarker features, and the predicted blood glucose values are feature-encoded; after the encoded clinical features, biomarker features, and predicted blood glucose values are obtained, multi-modal feature fusion is performed on them together with the glucose-related features; finally, the fused features are input into a classifier for prediction, the model output is the probability of developing type 2 diabetes within the next two years, and this probability is divided into high, medium, and low risk according to set thresholds.
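For orientation, the following minimal PyTorch sketch illustrates how such a two-stage design could be wired together. The class name, the assumed MODEL1 interface (extract_features, regress), and the feature dimensions are illustrative assumptions, not the patent's actual implementation.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Minimal sketch of MODEL2: fuses encoded clinical features, biomarker
    features, the blood glucose value predicted by MODEL1, and MODEL1's
    glucose-related image features, then classifies 2-year onset risk."""

    def __init__(self, glucose_model: nn.Module, clin_dim: int,
                 bio_dim: int, img_feat_dim: int, hidden: int = 128):
        super().__init__()
        self.glucose_model = glucose_model            # pre-trained MODEL1
        fused_dim = clin_dim + bio_dim + 1 + img_feat_dim
        self.head = nn.Sequential(
            nn.Linear(fused_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))                     # onset vs. no onset

    def forward(self, fundus_img, clin_feat, bio_feat):
        # Assumed MODEL1 interface: a feature extractor plus a regression head.
        img_feat = self.glucose_model.extract_features(fundus_img)
        pred_glucose = self.glucose_model.regress(img_feat)   # shape (B, 1)
        fused = torch.cat([clin_feat, bio_feat, pred_glucose, img_feat], dim=1)
        return torch.softmax(self.head(fused), dim=1)         # onset probability
```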
Effects of the invention
The invention provides a stratified prediction method for type 2 diabetes risk based on multi-modal feature fusion. The algorithm first pre-trains on a retinal fundus image data set covering healthy people and people with type 2 diabetes, together with their blood glucose label values, to obtain a model MODEL1 for predicting blood glucose values. Second, when predicting diabetes risk for a target population, MODEL1 is used to predict the blood glucose values of that population and to extract its glucose-related features. Then the clinical features of the population are collected, its biological index features are calculated, and these two kinds of features together with the predicted blood glucose values are encoded. The encoded features are fused with the glucose-related features in a multi-modal manner. Finally, the fused features are input into a classifier to predict the onset of type 2 diabetes within two years. Experiments show that the method, which is characterized by multi-modal feature fusion, combines the glucose-related features, the predicted blood glucose value, the clinical features, and the biological index features, takes multiple disease factors of type 2 diabetes into account, and can improve the accuracy of model prediction.
Drawings
FIG. 1 is a flow diagram of the algorithm described herein;
FIG. 2 is a schematic diagram of the MODEL1 training process;
FIG. 3 is a schematic diagram of the MODEL2 training process;
Detailed Description of the Invention
The first embodiment is as follows:
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
As shown in FIG. 1, the stratified prediction method for type 2 diabetes risk based on multi-modal feature fusion provided herein trains its models through the following steps:
s1, obtaining a retina fundus image data set, wherein the data set comprises a cross section data set D1 and a data set D2 in a retrospective queue C, the D1 data set is composed of retina fundus images provided by healthy people and people suffering from type 2 diabetes, blood glucose values and clinical information of each person are collected, the retrospective queue C is a healthy people which takes two years ago as a baseline and does not suffer from type 2 diabetes at the baseline, the D2 data set is composed of retina fundus images provided by the healthy people at the baseline, and clinical information of each person is collected at the same time;
s2, labeling the D1 data set, and taking the blood glucose value of each person as a label corresponding to the fundus image;
S3, labeling the D2 data set according to whether each member of cohort C developed type 2 diabetes within two years after the baseline: those who developed type 2 diabetes are labeled 1, and the others are labeled 0;
S4, on the D1 data set, training an Inception-ResNet-v1 regression model that predicts the fasting blood glucose value by extracting retinal fundus image features, until the prediction performance of the model reaches the determined standard;
S5, training a multi-modal type 2 diabetes risk prediction model with the data set D2 as input: the predicted blood glucose values and the glucose-related features of the D2 data set are obtained through the Inception-ResNet-v1 regression model, biological indices are calculated from the retinal fundus images to obtain the biomarker features, these features are fused with the clinical information features, the fused features are input into a classifier, the probability of developing type 2 diabetes in the next two years is output, and the probability is divided into low, medium, and high risk according to the set thresholds 0.3 and 0.7.
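As a minimal sketch of the final stratification step only, the thresholds 0.3 and 0.7 could be applied as follows; the function name and the returned labels are illustrative assumptions.

```python
def stratify_risk(prob_onset: float,
                  low_thr: float = 0.3, high_thr: float = 0.7) -> str:
    """Map the predicted 2-year probability of type 2 diabetes onset to a
    risk tier using the thresholds given in the description (0.3 and 0.7)."""
    if prob_onset < low_thr:
        return "low risk"
    elif prob_onset < high_thr:
        return "medium risk"
    return "high risk"

# Example: stratify_risk(0.45) -> "medium risk"
```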
The following examples illustrate the invention in detail:
The embodiment of the invention is implemented as follows:
S1, acquiring a retinal fundus image data set comprising a cross-sectional data set D1 and a data set D2 from a retrospective cohort C;
the hemoglobin A1c (HbA 1 c) value is more than or equal to 6.5%, and the fasting blood glucose level is more than or equal to 7.0mmol l in at least two times of treatment -1 Or the medical treatment history of diabetes is taken as a standard, and the people suffering from type 2 diabetes are defined.
The cross-sectional data set D1 consists of retinal fundus images of healthy people and patients with type 2 diabetes; the blood glucose values of the subjects serve as the labels of the corresponding retinal fundus images, and the following clinical information is collected: age, sex, height, weight, and blood pressure.
A retrospective cohort C is created with its baseline set two years ago; the enrollment criterion is being healthy and free of type 2 diabetes at that baseline.
The data set D2 is constructed from the retinal fundus images taken at baseline of the population in cohort C, and the same clinical information is collected: age, sex, height, weight, and blood pressure.
S2, labeling the D1 data set, and taking the blood glucose value of each person as a label corresponding to the retinal fundus image;
S3, labeling the D2 data set according to whether each member of cohort C developed type 2 diabetes within two years after the baseline: those who developed type 2 diabetes are labeled 1, and the others are labeled 0;
S4, as shown in FIG. 2, on the D1 data set an Inception-ResNet-v1 regression model that predicts the fasting blood glucose value is trained by extracting retinal fundus image features; a fully connected layer serves as the final layer of the network, the output layer of this fully connected layer has one neuron, and the model is trained until its prediction performance reaches the determined standard;
(1) Initialize the parameters of the Inception-ResNet-v1 regression model;
(2) Train the Inception-ResNet-v1 regression model: the retinal fundus images of the D1 data set are input into the model, and it outputs the predicted blood glucose value ŷ; the mean squared error loss MSE between ŷ and the corresponding blood glucose label value y is then computed as

MSE = (1/n) · Σ_{i=1}^{n} (y_i − ŷ_i)²

where n is the number of samples in the data set;
(3) Optimize the parameters of the network model by back-propagating the loss until the loss function satisfies the threshold condition; a minimal training-loop sketch of steps (1)-(3) is given below.
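In the sketch, a generic torchvision backbone stands in for Inception-ResNet-v1, which is not bundled with common vision libraries; the data loader, learning rate, and all names are illustrative assumptions rather than the patent's code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone: a generic CNN with a single-output regression head plays
# the role of the Inception-ResNet-v1 regressor of step S4.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)     # one output neuron: glucose

criterion = nn.MSELoss()                          # the MSE loss of step (2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader):
    """One pass over a DataLoader yielding (fundus image batch, glucose batch)."""
    model.train()
    for images, glucose in loader:                # glucose: float tensor (B, 1)
        optimizer.zero_grad()
        pred = model(images)                      # predicted blood glucose (B, 1)
        loss = criterion(pred, glucose)
        loss.backward()                           # step (3): back-propagation
        optimizer.step()
```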
and S5, as shown in FIG. 3, training a multi-modal type 2 diabetes risk prediction model with the data set D2 as input, fusing the predicted blood glucose value, the glucose-related features, the biomarker features, and the clinical features; the output of the model is the probability of developing type 2 diabetes in the next two years, which is divided into low, medium, and high risk according to the set thresholds 0.3 and 0.7.
The model training steps are as follows:
(1) Run the D2 data set through the Inception-ResNet-v1 regression model to obtain the predicted blood glucose values, and use the feature extraction module of the Inception-ResNet-v1 regression model to extract the glucose-related features of the D2 data set;
(2) Calculate biological indices from the retinal fundus images in the D2 data set: CRAE, CRVE, fractal dimension, branching angle, and tortuosity, to obtain the biomarker features;
the specific index calculation method is as follows:
(1) CRAE: within 0.5-2 optic-disc diameters in the retinal fundus image, the 6 arteries with the widest calibre are selected and CRAE is computed from them [formula given as an image in the original], where w1 and w2 are the widest and narrowest artery widths among the 6 arteries, respectively;
(2) CRVE: within 0.5-2 optic-disc diameters in the retinal fundus image, the 6 veins with the widest calibre are selected and CRVE is computed from them [formula given as an image in the original], where w1 and w2 are the widest and narrowest vein widths among the 6 veins, respectively;
(3) Fractal dimension: within 0.5-2 optic-disc diameters in the retinal fundus image, the fractal dimension is estimated using the box-counting method (see the sketch after this list);
(4) Branching angle: the calculation is not restricted to a particular range and is computed from the vessel widths [formula given as an image in the original], where d0 is the average width of the parent vessel and d1, d2 are the average widths of the two branch vessels;
(5) Tortuosity: the ratio of the straight-line length of the vessel to its actual length;
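Among these indices, the box-counting estimate of the fractal dimension admits a compact generic sketch. The NumPy fragment below assumes a binary vessel mask already cropped to the measurement region; it is a standard box-counting estimate, not the patent's own computation, and the function name and box sizes are assumptions.

```python
import numpy as np

def box_counting_dimension(vessel_mask: np.ndarray,
                           box_sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the fractal dimension of a binary vessel mask by counting,
    for each box size s, the number of s-by-s boxes containing vessel pixels,
    then fitting log(count) against log(1/s)."""
    counts = []
    h, w = vessel_mask.shape
    for s in box_sizes:
        # Count occupied boxes on an s x s grid (partial edge boxes included).
        occupied = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if vessel_mask[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(np.asarray(counts)), 1)
    return float(slope)
```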
(3) Collect the clinical information of the subjects in the D2 data set as clinical features, and encode the clinical features, the biomarker features, and the predicted blood glucose values using One-Hot encoding (see the encoding sketch after this list);
(4) Perform multi-modal feature fusion on the encoded clinical features, biomarker features, and predicted blood glucose values together with the glucose-related features;
(5) Pass the fused features through a fully connected layer and input them into a softmax classifier for prediction; the model output is the probability of developing type 2 diabetes within the next two years, which is divided into high, medium, and low risk according to the set thresholds.
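Steps (3)-(5) might be encoded and fused as in the following sketch. Binning the continuous clinical fields before One-Hot encoding, the particular field set, and all names are assumptions made for illustration only; the fused vector would then be passed through the fully connected layer and softmax classifier described in step (5).

```python
import numpy as np

def one_hot(index: int, num_classes: int) -> np.ndarray:
    """Minimal One-Hot encoder for a categorical (or binned) clinical field."""
    v = np.zeros(num_classes, dtype=np.float32)
    v[index] = 1.0
    return v

def encode_and_fuse(sex: int, age_bin: int, bio_feat: np.ndarray,
                    pred_glucose: float, glucose_feat: np.ndarray) -> np.ndarray:
    """Encode hypothetical clinical fields, then concatenate them with the
    biomarker features, the predicted glucose value, and the glucose-related
    features into one fused vector for the classifier."""
    clin = np.concatenate([one_hot(sex, 2), one_hot(age_bin, 10)])
    return np.concatenate([clin, bio_feat, [pred_glucose], glucose_feat])
```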
The method is characterized by multi-modal feature fusion: it fuses the glucose-related features, the predicted blood glucose values, the clinical features, and the biological index features, takes multiple disease factors of type 2 diabetes into account at the same time, improves the usefulness of the features, and can improve the accuracy of model prediction.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (2)

1. A method for stratified prediction of type 2 diabetes risk based on multi-modal feature fusion, characterized by comprising the following steps:
S1, obtaining a retinal fundus image data set comprising a cross-sectional data set D1 and a data set D2 from a retrospective cohort C, wherein the D1 data set consists of retinal fundus images provided by healthy people and people with type 2 diabetes, and the blood glucose value and clinical information of each subject are collected; the retrospective cohort C consists of people who, at a baseline set two years ago, were healthy and free of type 2 diabetes, the D2 data set consists of the retinal fundus images provided by these healthy people at baseline, and the clinical information of each subject is collected at the same time;
s2, labeling the D1 data set, and taking the blood glucose value of each person as a label corresponding to the retinal fundus image;
S3, labeling the D2 data set according to whether each member of cohort C developed type 2 diabetes within two years after the baseline: those who developed type 2 diabetes are labeled 1, and the others are labeled 0;
S4, on the D1 data set, training a blood glucose model MODEL1 that predicts the fasting blood glucose value by extracting retinal fundus image features, comparing the predicted blood glucose value with the actual blood glucose value to calculate a loss value, and adjusting the parameters of MODEL1 according to the loss value until the prediction performance of MODEL1 reaches the set standard;
and S5, taking the D2 data set as input, training a multi-modal type 2 diabetes risk prediction model MODEL2: first MODEL1 is used to predict the blood glucose values corresponding to the D2 data set while the glucose-related features of the D2 data set are extracted; second, biological indices are calculated from the retinal fundus images to obtain the biomarker features; multi-modal feature fusion is then performed on these features together with the clinical features, and the MODEL2 model outputs the probability of developing type 2 diabetes in the next two years, which is divided into high, medium, and low risk according to a set threshold.
2. The method for stratified prediction of type 2 diabetes risk based on multi-modal feature fusion as claimed in claim 1, wherein the training of the multi-modal type 2 diabetes risk prediction model in step S5 comprises the following steps:
S501, taking the D2 data set as the input of the pre-trained MODEL1 model to obtain the predicted blood glucose values, while the feature extraction module of the MODEL1 model is used to extract the glucose-related features of the D2 data set;
S502, calculating biological indices from the retinal fundus images in the D2 data set to obtain the biomarker features;
S503, collecting the clinical information of the subjects in the D2 data set as clinical features, and feature-encoding the clinical features, the biomarker features, and the predicted blood glucose values;
S504, performing multi-modal feature fusion on the encoded clinical features and biomarker features together with the predicted blood glucose values and the glucose-related features;
and S505, inputting the fused features into a classifier for prediction; because the fused features contain the predicted blood glucose information, they assist the classifier in prediction and improve the accuracy of disease risk prediction.
CN202211606506.8A 2022-12-14 2022-12-14 Multi-modal feature fusion-based type 2 diabetes risk stratification prediction method Active CN115831364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211606506.8A CN115831364B (en) 2022-12-14 2022-12-14 Multi-modal feature fusion-based type 2 diabetes risk stratification prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211606506.8A CN115831364B (en) 2022-12-14 2022-12-14 Multi-modal feature fusion-based type 2 diabetes risk stratification prediction method

Publications (2)

Publication Number Publication Date
CN115831364A true CN115831364A (en) 2023-03-21
CN115831364B CN115831364B (en) 2023-09-08

Family

ID=85547247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211606506.8A Active CN115831364B (en) 2022-12-14 2022-12-14 Multi-modal feature fusion-based type 2 diabetes risk stratification prediction method

Country Status (1)

Country Link
CN (1) CN115831364B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116504394A (en) * 2023-06-21 2023-07-28 天津医科大学朱宪彝纪念医院(天津医科大学代谢病医院、天津代谢病防治中心) Auxiliary medical method and device based on multi-feature fusion and computer storage medium
CN116721760A (en) * 2023-06-12 2023-09-08 东北林业大学 Biomarker-fused multitasking diabetic retinopathy detection algorithm
CN116913524A (en) * 2023-09-08 2023-10-20 中国人民解放军总医院第一医学中心 Method and system for predicting diabetic nephropathy based on retinal vascular imaging
CN117253614A (en) * 2023-11-14 2023-12-19 天津医科大学朱宪彝纪念医院(天津医科大学代谢病医院、天津代谢病防治中心) Diabetes risk early warning method based on big data analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160357934A1 (en) * 2014-11-14 2016-12-08 Humana Inc. Diabetes onset and progression prediction using a computerized model
CN109691979A (en) * 2019-01-07 2019-04-30 哈尔滨理工大学 A kind of diabetic retina image lesion classification method based on deep learning
CN112712895A (en) * 2021-02-04 2021-04-27 广州中医药大学第一附属医院 Data analysis method of multi-modal big data for type 2 diabetes complications
CN114724716A (en) * 2021-04-20 2022-07-08 山东大学齐鲁医院 Method, model training and apparatus for risk prediction of progression to type 2 diabetes
CN115171893A (en) * 2022-06-30 2022-10-11 宁夏回族自治区人民医院(宁夏眼科医院、西北民族大学第一附属医院) Diabetes patient assessment and management system based on big data analysis

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160357934A1 (en) * 2014-11-14 2016-12-08 Humana Inc. Diabetes onset and progression prediction using a computerized model
CN109691979A (en) * 2019-01-07 2019-04-30 哈尔滨理工大学 A kind of diabetic retina image lesion classification method based on deep learning
CN112712895A (en) * 2021-02-04 2021-04-27 广州中医药大学第一附属医院 Data analysis method of multi-modal big data for type 2 diabetes complications
CN114724716A (en) * 2021-04-20 2022-07-08 山东大学齐鲁医院 Method, model training and apparatus for risk prediction of progression to type 2 diabetes
CN115171893A (en) * 2022-06-30 2022-10-11 宁夏回族自治区人民医院(宁夏眼科医院、西北民族大学第一附属医院) Diabetes patient assessment and management system based on big data analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. S. ROOBINI: "Autonomous prediction of Type 2 Diabetes with high impact of glucose level", Computers and Electrical Engineering, vol. 101, pages 1 - 16 *
夏庭伟 et al.: "Construction of a multi-modal feature fusion prediction model integrating Chinese and Western medicine for type 2 diabetes complicated with nephropathy", 《中华中医药杂志》, vol. 37, no. 7, pages 4116 - 4120 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721760A (en) * 2023-06-12 2023-09-08 东北林业大学 Biomarker-fused multitasking diabetic retinopathy detection algorithm
CN116721760B (en) * 2023-06-12 2024-04-26 东北林业大学 Biomarker-fused multitasking diabetic retinopathy detection algorithm
CN116504394A (en) * 2023-06-21 2023-07-28 天津医科大学朱宪彝纪念医院(天津医科大学代谢病医院、天津代谢病防治中心) Auxiliary medical method and device based on multi-feature fusion and computer storage medium
CN116504394B (en) * 2023-06-21 2024-01-30 天津医科大学朱宪彝纪念医院(天津医科大学代谢病医院、天津代谢病防治中心) Auxiliary medical method and device based on multi-feature fusion and computer storage medium
CN116913524A (en) * 2023-09-08 2023-10-20 中国人民解放军总医院第一医学中心 Method and system for predicting diabetic nephropathy based on retinal vascular imaging
CN116913524B (en) * 2023-09-08 2023-12-26 中国人民解放军总医院第一医学中心 Method and system for predicting diabetic nephropathy based on retinal vascular imaging
CN117253614A (en) * 2023-11-14 2023-12-19 天津医科大学朱宪彝纪念医院(天津医科大学代谢病医院、天津代谢病防治中心) Diabetes risk early warning method based on big data analysis
CN117253614B (en) * 2023-11-14 2024-01-26 天津医科大学朱宪彝纪念医院(天津医科大学代谢病医院、天津代谢病防治中心) Diabetes risk early warning method based on big data analysis

Also Published As

Publication number Publication date
CN115831364B (en) 2023-09-08

Similar Documents

Publication Publication Date Title
CN115831364A (en) Type 2 diabetes risk layered prediction method based on multi-modal feature fusion
WO2021120936A1 (en) Chronic disease prediction system based on multi-task learning model
CN111261282A (en) Sepsis early prediction method based on machine learning
CN111968741B (en) Deep learning and integrated learning-based diabetes complication high-risk early warning system
CN110246577B (en) Method for assisting gestational diabetes genetic risk prediction based on artificial intelligence
CN110277167A (en) The Chronic Non-Communicable Diseases Risk Forecast System of knowledge based map
CN112133441A (en) Establishment method and terminal of MH post-operation fissure hole state prediction model
CN111080643A (en) Method and device for classifying diabetes and related diseases based on fundus images
CN114023449A (en) Diabetes risk early warning method and system based on depth self-encoder
CN110400610B (en) Small sample clinical data classification method and system based on multichannel random forest
WO2022166158A1 (en) System for performing long-term hazard prediction on hemodialysis complications on basis of convolutional survival network
CN113470816A (en) Machine learning-based diabetic nephropathy prediction method, system and prediction device
CN111028232A (en) Diabetes classification method and equipment based on fundus images
Zhang et al. Nonlaboratory-based risk assessment model for type 2 diabetes mellitus screening in Chinese rural population: a joint bagging-boosting model
CN112288749A (en) Skull image segmentation method based on depth iterative fusion depth learning model
CN112991320A (en) System and method for predicting hematoma expansion risk of cerebral hemorrhage patient
CN117315258A (en) Lightweight retinal vessel segmentation method based on graph convolution network and partial convolution
CN112712895B (en) Data analysis method of multi-modal big data aiming at type 2 diabetes complications
CN116779091A (en) Automatic generation method of multi-mode network interconnection and fusion chest image diagnosis report
CN111081334A (en) Chronic disease early warning method based on risk factor probability combination analysis
He et al. Quantification of cognitive function in Alzheimer’s disease based on deep learning
CN116403714B (en) Cerebral apoplexy END risk prediction model building method and device, END risk prediction system, electronic equipment and medium
CN110992309B (en) Fundus image segmentation method based on deep information transfer network
CN115547502B (en) Hemodialysis patient risk prediction device based on time sequence data
Chaturvedi et al. An Innovative Approach of Early Diabetes Prediction using Combined Approach of DC based Bidirectional GRU and CNN

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant