CN113345581A - Integrated learning-based cerebral apoplexy thrombolysis post-hemorrhage probability prediction method - Google Patents
Integrated learning-based cerebral apoplexy thrombolysis post-hemorrhage probability prediction method
- Publication number
- CN113345581A CN113345581A CN202110525660.1A CN202110525660A CN113345581A CN 113345581 A CN113345581 A CN 113345581A CN 202110525660 A CN202110525660 A CN 202110525660A CN 113345581 A CN113345581 A CN 113345581A
- Authority
- CN
- China
- Prior art keywords
- model
- training
- folds
- fold
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
A method for predicting the probability of hemorrhage after thrombolysis in stroke, based on ensemble learning. The raw data are first preprocessed so that the data used for model training are compact yet still fully represent the original data. The preprocessed data are then input to four sub-learners, each trained to produce a first-stage prediction, and the results are assembled. Finally, the assembled results are input to a CatBoost model, which is trained to yield the final probability-prediction model. The invention analyzes, both theoretically and empirically, the factors associated with post-thrombolysis hemorrhage in stroke, extracts the key factors, and exploits the performance advantages of ensemble learning to train a prediction model of high accuracy. Through deep exploration of the correlations among the features and sound use of ensemble learning, the resulting model can offer physicians reliable guidance in the clinic.
Description
Technical Field
The invention relates to a cerebral apoplexy thrombolysis post-hemorrhage probability prediction method based on ensemble learning.
Background
Stroke, as considered here, occurs when a cerebral vessel is blocked by a thrombus; once it happens, it severely endangers the patient's life and health. The common clinical treatment is thrombolytic medication, but thrombolytic drugs carry a certain probability of serious bleeding complications, and among patients who bleed, the probability of severe disability or death exceeds 90 percent. At present, the bleeding probability that doctors quote to patients and their families is mostly based on personal experience; there is no substantive basis for judging an individual patient, each doctor applies his or her own standard, and the variability is large. An effective prediction method is therefore urgently needed in the clinic.
In recent years, with the advent of AlphaGo and similar systems, machine learning has become increasingly popular and is applied in ever more fields, such as natural language processing and image style transfer, where it plays an important role. Ensemble learning is a major branch of machine learning that completes a learning task by constructing multiple learners. Its main characteristics are: 1) the types of the sub-learners are unrestricted, and the overall ensemble may be homogeneous (all sub-learners of the same type) or heterogeneous (sub-learners of different types), which improves the performance and flexibility of ensemble learning; 2) the generalization performance of an ensemble is usually markedly better than that of any single learner, and the weaker the sub-learners are, the more pronounced this advantage becomes.
Disclosure of Invention
To overcome the shortcomings of the prior art and to give physicians a reliable basis for judging the probability of hemorrhage after thrombolysis in stroke patients, the invention provides an ensemble-learning-based method for predicting post-thrombolysis hemorrhage probability: an ensemble model considers multiple patient factors and outputs a final judgment.
In order to solve the technical problems, the invention provides the following technical scheme:
a method for predicting the bleeding probability after thrombolytic stroke based on ensemble learning, comprising the following steps:
(1) input feature processing: after the raw data are obtained, operations such as removing irrelevant features are performed to improve the performance of the final model;
(2) single-model training: the preprocessed data set is trained separately with four sub-learners to obtain their respective prediction results, which await the next processing step; the process is as follows:
first, the data set is divided into folds; let the total number of folds be N+1. Among these, the first N folds contain the same number of samples; the (N+1)-th fold has no such requirement, but should contain a number of samples as close as possible to that of the first N folds;
for each sub-learner, a total of N training rounds is required. In round T, the training set is the first N folds with fold_T removed, i.e., the remaining N−1 folds, and the trained model is used to predict fold_T and fold_{N+1}. Let fold_T and fold_{N+1} contain M and m samples, respectively. Each round then yields two vectors: the model's prediction for fold_T, denoted P_T (length M), and its prediction for fold_{N+1}, denoted Q_T (length m). After the N rounds, all P_T are concatenated to form the sub-learner's prediction for the first N folds, a vector of length N·M; averaging the N vectors Q_T gives the sub-learner's final prediction for fold_{N+1}, a vector of length m.
Because four sub-learners are used, the single-model training step yields two prediction matrices: the matrix of predictions for the first N folds (of size N·M × 4) and the matrix of predictions for fold_{N+1} (of size m × 4).
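The fold scheme above can be sketched as follows. This is a minimal sketch, not the patent's implementation: scikit-learn classifiers stand in for the four sub-learners (CatBoost, XGBoost, LightGBM, FM), and the fold sizes (N = 4, M = 50, m = 40) and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

# Stand-ins for the four sub-learners (CatBoost, XGBoost, LightGBM, FM in the
# patent); swap in the real libraries where they are available.
sub_learners = [
    GradientBoostingClassifier(random_state=0),
    RandomForestClassifier(random_state=0),
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(random_state=0),
]

# Illustrative data: N = 4 training folds of M = 50 samples each, plus a
# hold-out fold_{N+1} of m = 40 samples.
X, y = make_classification(n_samples=240, n_features=10, random_state=0)
X_pool, y_pool = X[:200], y[:200]   # first N folds
X_hold, y_hold = X[200:], y[200:]   # fold_{N+1}

N = 4
train_meta = np.zeros((len(X_pool), len(sub_learners)))  # (N*M) x 4 matrix
hold_meta = np.zeros((len(X_hold), len(sub_learners)))   # m x 4 matrix

for j, learner in enumerate(sub_learners):
    hold_rounds = []
    for train_idx, val_idx in KFold(n_splits=N).split(X_pool):
        model = clone(learner).fit(X_pool[train_idx], y_pool[train_idx])
        # P_T: out-of-fold prediction for fold_T
        train_meta[val_idx, j] = model.predict_proba(X_pool[val_idx])[:, 1]
        # one of the N predictions for fold_{N+1}, averaged below
        hold_rounds.append(model.predict_proba(X_hold)[:, 1])
    hold_meta[:, j] = np.mean(hold_rounds, axis=0)

print(train_meta.shape, hold_meta.shape)  # (200, 4) (40, 4)
```

Concatenating the out-of-fold P_T vectors column-by-column is exactly what fills `train_meta`; the two printed shapes are the N·M × 4 and m × 4 matrices the text describes.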
(3) multi-model fusion: the outputs of the previous step are fused, and the final probability prediction is output; the multi-model fusion operates as follows:
a new learner, denoted L_meta, is constructed to learn from the outputs of the sub-learners. L_meta is a CatBoost model whose training set is the N·M × 4 matrix from step (2) and whose validation set is the m × 4 matrix. The trained L_meta is the final model and is used to predict new samples. A new sample is likewise processed through steps (1) and (2); L_meta then receives as input the first-stage predictions produced by the single-model training of step (2) and outputs the predicted probability of hemorrhage after thrombolysis.
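The fusion step can be sketched as below. The stacked matrices here are synthetic placeholders for the real step (2) output, and logistic regression stands in for the patent's CatBoost meta-learner so the sketch runs with scikit-learn alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical output of step (2): rows are samples, columns are the four
# sub-learners' probability predictions (real values would come from stacking).
train_meta = rng.uniform(size=(200, 4))          # (N*M) x 4 training matrix
y_train = (train_meta.mean(axis=1) > 0.5).astype(int)
hold_meta = rng.uniform(size=(40, 4))            # m x 4 validation matrix

# Meta-learner: CatBoost in the patent; logistic regression is used here only
# to keep the sketch dependency-free.
meta = LogisticRegression().fit(train_meta, y_train)
p_bleed = meta.predict_proba(hold_meta)[:, 1]    # predicted hemorrhage probability
print(p_bleed.shape)  # (40,)
```

At prediction time a new patient record would pass through preprocessing and the four sub-learners first; only the resulting 4-vector of first-stage predictions reaches the meta-learner.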
Further, in step (1), the input-feature processing comprises the following steps:
(1-1) conventional features are input into the model directly, without any processing;
(1-2) useless features are eliminated outright, to prevent them from polluting the trained model;
(1-3) related features are merged;
(1-4) continuous features are standardized, i.e., numerical features are compressed into the same range;
(1-5) discrete features are one-hot encoded.
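The processing steps above can be sketched with scikit-learn. The column names and values are hypothetical, not taken from the patent's actual feature set; only the operations (dropping a useless feature, standardizing continuous columns, one-hot encoding a discrete column) mirror the text.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical raw records; the column names are illustrative only.
df = pd.DataFrame({
    "systolic_bp": [128.0, 165.0, 142.0, 110.0],
    "bmi": [21.3, 27.8, 24.1, 19.5],
    "gender": ["M", "F", "F", "M"],
    "blood_type": ["A", "O", "B", "AB"],
})

# (1-2) eliminate a useless feature outright
df = df.drop(columns=["blood_type"])

# (1-4) standardize continuous features; (1-5) one-hot encode discrete ones
pre = ColumnTransformer([
    ("num", StandardScaler(), ["systolic_bp", "bmi"]),
    ("cat", OneHotEncoder(), ["gender"]),
])
X = pre.fit_transform(df)
print(X.shape)  # (4, 4): two standardized columns + two one-hot columns
```

Merging related features, step (1-3), is domain-specific (e.g. combining a lab value with the corresponding diagnosis flag) and is not shown here.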
The ensemble model provided by the invention comprises four sub-learners: CatBoost, XGBoost, LightGBM, and Factorization Machine. Note that, apart from the Factorization Machine, the other three sub-learners are themselves ensembles of multiple learners, so from this point of view the invention can be regarded as an "ensemble of ensembles". To avoid ambiguity, hereinafter "sub-learner" refers to these four frameworks unless otherwise specified.
Ensemble learning can be divided into two categories according to how the sub-learners are generated: serial and parallel. Strong dependencies exist between serially generated sub-learners; none exist between parallel ones. The three ensemble frameworks used by the invention (all except the Factorization Machine) generate their sub-learners serially and belong specifically to the Boosting family of algorithms.
The earliest Boosting algorithm was AdaBoost, which appeared in 1997. During training, AdaBoost gives every sample the same initial weight; after each learner is trained, the sample weights are adjusted according to that learner's performance, so that misclassified samples receive larger weights and attract more attention from subsequent learners. Several learners are trained in this way and finally combined by weighting. In 2001, Gradient Boosting was introduced. It is broadly similar to AdaBoost; the greatest difference lies in how each round makes the learner focus on the samples the previous round got wrong: AdaBoost increases the weights of the erroneous samples, whereas Gradient Boosting corrects the previous round's errors by fitting its negative gradient.
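A toy illustration of "fitting the negative gradient": for squared loss the negative gradient is simply the residual, so each round fits a small tree to what the model so far still gets wrong. The data, depth, learning rate, and round count are all illustrative choices, not values from the patent.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

pred = np.full_like(y, y.mean())       # round 0: constant model
learning_rate = 0.1
for _ in range(50):
    residual = y - pred                # negative gradient of 1/2*(y - pred)^2
    tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, residual)
    pred += learning_rate * tree.predict(X)  # each round corrects prior errors

mse = np.mean((y - pred) ** 2)
print(mse < np.var(y))  # boosted fit improves on the constant model
```

The same additive scheme underlies XGBoost, LightGBM, and CatBoost; they differ in the objective, tree construction, and engineering, as the following paragraphs describe.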
XGBoost, short for Extreme Gradient Boosting, appeared in 2015 as a new implementation of Gradient Boosting. Its loss function differs mainly in that: 1) a regularization term is added to the original loss, producing a new objective function; 2) the objective is expanded to second order by Taylor series and optimized in a Newton-like manner. Equation (1) is the objective function of the m-th learner in XGBoost, where N is the total number of samples and Ω(f_m) is the added regularization term.
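The typeset Equation (1) was lost in extraction. In the standard XGBoost formulation, consistent with the surrounding text (N samples, regularizer Ω(f_m)), the objective of the m-th learner reads:

```latex
% Reconstructed standard XGBoost objective; the patent's own typesetting
% was lost in extraction.
\mathcal{L}^{(m)}
  = \sum_{i=1}^{N} l\!\left(y_i,\; \hat{y}_i^{(m-1)} + f_m(x_i)\right)
  + \Omega(f_m),
\qquad
\Omega(f_m) = \gamma T + \tfrac{1}{2}\,\lambda \lVert w \rVert^{2}
% Here T is the number of leaves of f_m and w its vector of leaf weights
% (this T is unrelated to the fold index T used earlier).
```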
LightGBM, proposed by Microsoft in 2017, stands for Light Gradient Boosting Machine; the name reveals its greatest feature: it is light. Compared with XGBoost, LightGBM trains faster with comparable or higher accuracy and consumes less memory. It also supports distributed computing and can process massive data quickly. In principle, LightGBM is almost the same as XGBoost, likewise fitting the negative gradient; its optimizations lie mainly in multithreading, decision-tree construction, and related aspects.
CatBoost was open-sourced by the Russian search giant Yandex in April 2017 and, together with XGBoost and LightGBM, is one of the three most mainstream members of the Boosting family. CatBoost stands for Categorical Boosting; it is built on symmetric trees and its accuracy compares favorably with XGBoost and LightGBM. The main pain point it addresses is the efficient and principled handling of categorical features; compared with the other two, its biggest innovation is the use of symmetric trees.
Factorization Machine (FM). In a traditional linear model the features are independent; if feature interactions must be considered, the features may have to be cross-combined manually, which becomes infeasible when the feature dimension is high. A nonlinear SVM can apply a kernel mapping to the features, but it cannot learn well when the features are highly sparse. Other factorization models, such as matrix factorization (MF) and SVD++, do learn latent cross-relations between features, but each is essentially limited to a specific scenario. For these reasons, FM emerged in highly sparse data settings such as recommender systems. Equation (2) is the model equation of FM, whose key component is ⟨V_i, V_j⟩ x_i x_j, where ⟨V_i, V_j⟩ is the dot product of the i-th and j-th rows of the matrix V, and x_i and x_j are two features. Through this operation, all features of a sample are related to one another to different degrees, which makes it convenient to uncover the hidden relations among them.
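Equation (2) was likewise lost in extraction; the standard second-order FM model equation, matching the ⟨V_i, V_j⟩ x_i x_j term discussed above, is:

```latex
% Reconstructed standard second-order Factorization Machine equation;
% the patent's own typesetting was lost in extraction.
\hat{y}(x) = w_0 + \sum_{i=1}^{n} w_i x_i
  + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle V_i, V_j \rangle \, x_i x_j
% where n is the number of features, w_0 and w_i are linear weights,
% and V_i is the i-th row of the factor matrix V.
```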
The beneficial effects of the invention are: factors related to post-thrombolysis hemorrhage in stroke are analyzed theoretically and empirically, key factors are extracted, and a prediction model of high accuracy is trained by exploiting the performance advantages of ensemble learning; through deep exploration of the correlations among features and sound use of ensemble learning, the resulting model can offer physicians reliable guidance in the clinic.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a view of data preprocessing according to the present invention.
FIG. 3 is a single model training view of the present invention.
Detailed description of the preferred embodiments
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to 3, a method for predicting bleeding probability after thrombolysis of cerebral apoplexy based on ensemble learning includes the following steps:
1) Input feature processing. As shown in fig. 1, the features of the original feature space can be classified into five categories according to the subsequent processing they require:
- a) conventional features, which can be input into the model directly without any processing;
- b) useless features: by experience, some features, such as blood type, clearly contribute nothing positive to the target; the method eliminates them directly to prevent them from polluting the trained model;
- c) related features: in many settings the raw data contain features with similar implied meanings, such as blood glucose level and diabetes; if such related features are kept as-is, the common information they carry may receive too much weight in the final model and distort its judgment, so related features are merged;
- d) continuous feature standardization: a large class of features common in raw data are continuous numerical features, and magnitude differences between them, such as between systolic blood pressure and BMI, are harmless in themselves, but when used to train a model the larger values tend to receive more weight; the invention therefore standardizes these features, compressing the numerical features into the same range;
- e) one-hot encoding of discrete features, i.e., categorical features such as gender. One approach to categorical features is to assign each category a natural number, in order, to represent it.
However, as mentioned in d), even though the numbers representing the categories do not differ greatly, numbers of different sizes still exert some influence on the result. The invention therefore chooses to process categorical features with one-hot encoding; although this increases the memory load, it greatly improves the soundness of the model;
2) Single-model training. FIG. 2 shows the learning process of one sub-learner during single-model training. Training takes N rounds in total; N is set according to the specific situation, or several values of N are tried and the best is chosen according to the final model's performance. As shown in FIG. 2, the data set produced by the feature preprocessing of the previous step is divided into N+1 folds, of which the first N folds serve as the training set and the (N+1)-th fold as the validation set used to verify model performance. Let each fold of the training set contain M samples and the (N+1)-th fold contain m samples. The right side of FIG. 2 shows the overall single-model training process: the four sub-learners are trained separately, and the results of all sub-learners are then assembled into the final prediction matrices. The left side shows the detailed training process of a single learner: N rounds of training are performed in total; in round T, fold_T (the T-th fold) is the validation set and the remaining N−1 folds are the training set; the first-stage model the sub-learner obtains on this further-divided training set is used to predict fold_T and fold_{N+1}, generating two prediction vectors, e.g. P_1 and Q_1 in the first round. Proceeding this way, after N rounds the sub-learner's predictions for the first N folds are concatenated to give the prediction vector for fold_1 through fold_N, while the prediction vector for fold_{N+1} is obtained by averaging the N per-round vectors. After all sub-learners have completed their N rounds, the results are combined into the corresponding prediction matrices;
3) Multi-model fusion: the outputs of the previous step are fused, and the final probability prediction is output; the multi-model fusion operates as follows:
a new learner, denoted L_meta, is constructed to learn from the outputs of the sub-learners. L_meta is a CatBoost model whose training set is the N·M × 4 matrix from step (2) and whose validation set is the m × 4 matrix. The trained L_meta is the final model and is used to predict new samples. A new sample is likewise processed through steps (1) and (2); L_meta then receives as input the first-stage predictions produced by the single-model training of step (2) and outputs the predicted probability of hemorrhage after thrombolysis.
In this embodiment, factors related to post-thrombolysis hemorrhage in stroke are analyzed theoretically and empirically, key factors are extracted, and a prediction model of high accuracy is trained by exploiting the performance advantages of ensemble learning; through deep exploration of the correlations among features and sound use of ensemble learning, the resulting model can offer physicians reliable guidance in the clinic.
Claims (2)
1. A method for predicting the probability of hemorrhage after thrombolysis in stroke, based on ensemble learning, characterized by comprising the following steps:
(1) input feature processing: after the raw data are obtained, operations such as removing irrelevant features are performed to improve the performance of the final model;
(2) single-model training: the preprocessed data set is trained separately with four sub-learners to obtain their respective prediction results, which await the next processing step; the process is as follows:
first, the data set is divided into folds; let the total number of folds be N+1. Among these, the first N folds contain the same number of samples; the (N+1)-th fold has no such requirement, but should contain a number of samples as close as possible to that of the first N folds;
for each sub-learner, a total of N training rounds is required. In round T, the training set is the first N folds with fold_T removed, i.e., the remaining N−1 folds, and the trained model is used to predict fold_T and fold_{N+1}. Let fold_T and fold_{N+1} contain M and m samples, respectively. Each round then yields two vectors: the model's prediction for fold_T, denoted P_T (length M), and its prediction for fold_{N+1}, denoted Q_T (length m). After the N rounds, all P_T are concatenated to form the sub-learner's prediction for the first N folds, a vector of length N·M; averaging the N vectors Q_T gives the sub-learner's final prediction for fold_{N+1}, a vector of length m.
Because four sub-learners are used, the single-model training step yields two prediction matrices: the matrix of predictions for the first N folds (of size N·M × 4) and the matrix of predictions for fold_{N+1} (of size m × 4).
(3) multi-model fusion: the outputs of the previous step are fused, and the final probability prediction is output; the multi-model fusion operates as follows:
a new learner, denoted L_meta, is constructed to learn from the outputs of the sub-learners. L_meta is a CatBoost model whose training set is the N·M × 4 matrix from step (2) and whose validation set is the m × 4 matrix. The trained L_meta is the final model and is used to predict new samples. A new sample is likewise processed through steps (1) and (2); L_meta then receives as input the first-stage predictions produced by the single-model training of step (2) and outputs the predicted probability of hemorrhage after thrombolysis.
2. The method for predicting post-thrombolysis hemorrhage probability based on ensemble learning of claim 1, wherein in step (1) the input-feature processing comprises the following steps:
(1-1) conventional features are input into the model directly, without any processing;
(1-2) useless features are eliminated outright, to prevent them from polluting the trained model;
(1-3) related features are merged;
(1-4) continuous features are standardized, i.e., numerical features are compressed into the same range;
(1-5) discrete features are one-hot encoded.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110525660.1A CN113345581B (en) | 2021-05-14 | 2021-05-14 | Cerebral apoplexy post thrombolysis bleeding probability prediction method based on ensemble learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113345581A true CN113345581A (en) | 2021-09-03 |
CN113345581B CN113345581B (en) | 2023-06-27 |
Family
ID=77469741
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110525660.1A Active CN113345581B (en) | 2021-05-14 | 2021-05-14 | Cerebral apoplexy post thrombolysis bleeding probability prediction method based on ensemble learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113345581B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140045198A1 (en) * | 2011-02-22 | 2014-02-13 | Institut De Recerca Hospital Universitari Vall D'hebron | Method of predicting the evolution of a patient suffering of stroke |
CN110033860A (en) * | 2019-02-27 | 2019-07-19 | 杭州贝安云科技有限公司 | A kind of Inherited Metabolic Disorders recall rate method for improving based on machine learning |
CN110472778A (en) * | 2019-07-29 | 2019-11-19 | 上海电力大学 | A kind of short-term load forecasting method based on Blending integrated study |
CN111199343A (en) * | 2019-12-24 | 2020-05-26 | 上海大学 | Multi-model fusion tobacco market supervision abnormal data mining method |
CN111968741A (en) * | 2020-07-15 | 2020-11-20 | 华南理工大学 | Diabetes complication high-risk early warning system based on deep learning and integrated learning |
CN112700325A (en) * | 2021-01-08 | 2021-04-23 | 北京工业大学 | Method for predicting online credit return customers based on Stacking ensemble learning |
US20210125207A1 (en) * | 2019-10-29 | 2021-04-29 | Somnath Banerjee | Multi-layered market forecast framework for hotel revenue management by continuously learning market dynamics |
Non-Patent Citations (4)
Title |
---|
HUANHUAN ZHAO et al.: "Predict Onset Age of Hypertension Using Catboost and Medical Big Data", 2020 International Conference on Networking and Network Applications *
An Yongli: "A blood uric acid prediction model based on multi-dimensional features and model fusion", Chinese Master's Theses, Information Science and Technology *
Ji Siming: "Research on a coronary heart disease prediction model based on multi-source feature analysis", Chinese Master's Theses, Information Science and Technology *
Wang Meng et al.: "Research on prediction models for intracerebral-hemorrhage-associated pneumonia based on machine learning algorithms", Chinese Journal of Stroke *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113951845A (en) * | 2021-12-01 | 2022-01-21 | 中国人民解放军总医院第一医学中心 | Method and system for predicting severe blood loss and injury condition of wound |
CN113951845B (en) * | 2021-12-01 | 2022-08-05 | 中国人民解放军总医院第一医学中心 | Method and system for predicting severe blood loss and injury condition of wound |
CN115064255A (en) * | 2022-06-27 | 2022-09-16 | 上海梅斯医药科技有限公司 | Medical expense prediction method, system, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113345581B (en) | 2023-06-27 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |