CN109887606A - Diagnosis prediction method based on an attention-based bidirectional recurrent neural network - Google Patents

Diagnosis prediction method based on an attention-based bidirectional recurrent neural network

Info

Publication number
CN109887606A
CN109887606A (application CN201910149279.2A); granted as CN109887606B
Authority
CN
China
Prior art keywords
attention
vector
hidden state
medical
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910149279.2A
Other languages
Chinese (zh)
Other versions
CN109887606B (en)
Inventor
莫毓昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201910149279.2A priority Critical patent/CN109887606B/en
Publication of CN109887606A publication Critical patent/CN109887606A/en
Application granted granted Critical
Publication of CN109887606B publication Critical patent/CN109887606B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Medical Treatment And Welfare Office Work (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a diagnosis prediction method based on an attention-based bidirectional recurrent neural network, relating to the technical field of predictive diagnosis. The method first embeds the high-dimensional medical codes (i.e., clinical variables) into a low-dimensional code space, then feeds the embedded representations into an attention-based bidirectional recurrent neural network to generate hidden state representations. The hidden representations are fed into a softmax layer to predict the medical codes of future visits. Experimental data show that, with the method provided in this embodiment, the attention mechanism assigns different weights to previous visits when predicting future visit information; the method not only completes the diagnosis prediction task effectively, but also yields reasonably interpretable prediction results.

Description

Diagnosis prediction method based on an attention-based bidirectional recurrent neural network
Technical field
The present invention relates to the technical field of predictive diagnosis, and more particularly to a diagnosis prediction method based on an attention-based bidirectional recurrent neural network.
Background art
Electronic Health Records (EHR) consist of longitudinal patient health data. A patient's EHR data comprise a sequence of visits over time, where each visit contains multiple medical codes, including diagnosis, medication, and procedure codes, and such data have been applied successfully to several predictive modeling tasks in health care. EHR data are represented by a set of high-dimensional clinical variables (i.e., medical codes). One of the key tasks is to predict future diagnoses from a patient's past EHR data, i.e., diagnosis prediction.
In diagnosis prediction, each patient's visits, and the medical codes within each visit, may have different importance. Therefore, the most important and most challenging problems in diagnosis prediction are: 1. how to correctly model such temporal and high-dimensional EHR data so as to significantly improve prediction performance; 2. how to reasonably interpret the importance of visits and medical codes in the prediction results.
Diagnosis prediction is a challenging and significant task, and accurate prediction is the key difficulty of medical prediction models. Much existing diagnosis prediction work uses deep learning techniques, such as recurrent neural networks (RNNs), to model temporal and high-dimensional EHR data. However, RNN-based methods may be unable to fully remember all previous visit information, which can lead to incorrect predictions.
Summary of the invention
The purpose of the present invention is to provide a diagnosis prediction method based on an attention-based bidirectional recurrent neural network, so as to solve the aforementioned problems in the prior art.
To achieve the above goals, the technical solution adopted by the invention is as follows:
A diagnosis prediction method based on an attention-based bidirectional recurrent neural network comprises the following steps:
S1. Construct a model of the medical codes x_t in the patient's electronic health record as a time-ordered sequence of diagnoses, intervention codes, and admission types. Given the visit information from time 1 to t, embed the medical codes x_i ∈ {0,1}^{|C|} of the i-th visit into a vector representation v_i:
v_i = ReLU(W_v x_i + b_c)
where |C| is the number of unique medical codes, m is the embedding dimension, W_v ∈ R^{m×|C|} is the weight matrix of the medical codes, and b_c ∈ R^m is the bias vector. ReLU is the rectified linear unit defined as ReLU(v) = max(v, 0), where max() is applied element-wise to the vector;
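As an illustration of step S1, the following is a minimal sketch in Python/NumPy (not the patent's reference implementation); the vocabulary size, embedding dimension, random initialization, and function names are assumptions for demonstration only.

```python
import numpy as np

def relu(v):
    # Rectified linear unit, applied element-wise: ReLU(v) = max(v, 0)
    return np.maximum(v, 0)

# Assumed sizes for illustration only.
num_codes = 942    # |C|: number of unique medical codes
embed_dim = 128    # m: embedding dimension

rng = np.random.default_rng(0)
W_v = rng.normal(scale=0.01, size=(embed_dim, num_codes))   # W_v in R^{m x |C|}
b_c = np.zeros(embed_dim)                                   # b_c in R^m

def embed_visit(x_i):
    """Embed a multi-hot visit vector x_i in {0,1}^|C| as v_i = ReLU(W_v x_i + b_c)."""
    return relu(W_v @ x_i + b_c)

# Example: a visit containing the codes with indices 3 and 17.
x = np.zeros(num_codes)
x[[3, 17]] = 1.0
v = embed_visit(x)   # v has shape (embed_dim,)
```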
S2. Construct different bidirectional recurrent neural networks according to the different attention mechanisms:
Feed the vectors v_i into a bidirectional recurrent neural network (BRNN) and output the hidden state h_i as the representation of the i-th visit; the set of hidden states is denoted {h_1, h_2, …},
where each h_i = [h_i^→; h_i^←] is the concatenation of the forward hidden state h_i^→ and the backward hidden state h_i^←;
S3. Compute the context state vector c_t from the relative importances α_t and the hidden states h_1, …, h_{t-1}, as follows:
c_t = Σ_{i=1}^{t-1} α_{ti} h_i
The attention weight vector α_t is obtained by the following softmax function:
α_t = Softmax([α_{t1}, α_{t2}, …, α_{t(t-1)}])
where each α_{ti} of the relative importance vector α_t is computed by an attention mechanism;
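A minimal sketch of step S3 follows, assuming the raw attention scores α_{t1}, …, α_{t(t-1)} have already been produced by one of the attention functions described further below; the numerical values are placeholders.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def context_vector(scores, hidden_states):
    """Compute c_t = sum_i alpha_{ti} * h_i over i = 1..t-1.

    scores        : raw attention scores [alpha_t1, ..., alpha_t(t-1)]
    hidden_states : array of shape (t-1, 2p), the BRNN hidden states h_1..h_{t-1}
    """
    alpha = softmax(scores)                  # attention weight vector alpha_t
    return alpha @ hidden_states, alpha      # c_t has shape (2p,)

# Example with made-up numbers (3 past visits, hidden size 2p = 4).
h = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.5, 0.1, 0.0, 0.2],
              [0.3, 0.3, 0.3, 0.3]])
raw_scores = np.array([0.8, 0.1, 0.5])
c_t, alpha_t = context_vector(raw_scores, h)
```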
S4. Based on the context state vector c_t and the current hidden state h_t, use a simple concatenation layer to combine the information from the two vectors and generate an attention hidden state vector, as follows:
h̃_t = tanh(W_c [c_t; h_t])
where W_c ∈ R^{r×4p} is a weight matrix;
S5. Feed the attention hidden state vector h̃_t into a softmax layer to generate the (t+1)-th visit information, defined as:
ŷ_{t+1} = Softmax(W_s h̃_t + b_s)
where W_s and b_s are the parameters to be learned and |G| is the number of unique categories.
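Steps S4 and S5 can be illustrated together; the tanh concatenation layer and the softmax output layer below follow the formulas above, and the randomly initialized W_c, W_s, and b_s (the latter two names introduced here only for illustration) stand in for learned parameters.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

# Assumed sizes: hidden size 2p, attention hidden size r, |G| output categories.
two_p, r, num_categories = 4, 6, 10
rng = np.random.default_rng(1)
W_c = rng.normal(scale=0.1, size=(r, 2 * two_p))        # W_c in R^{r x 4p}
W_s = rng.normal(scale=0.1, size=(num_categories, r))   # output weights (assumed name)
b_s = np.zeros(num_categories)                          # output bias (assumed name)

def attention_hidden_state(c_t, h_t):
    """S4: combine context vector and current hidden state, h~_t = tanh(W_c [c_t; h_t])."""
    return np.tanh(W_c @ np.concatenate([c_t, h_t]))

def predict_next_visit(h_tilde):
    """S5: feed the attention hidden state into a softmax layer to score the next visit."""
    return softmax(W_s @ h_tilde + b_s)

# Example with placeholder vectors.
c_t = np.zeros(two_p)
h_t = np.ones(two_p) * 0.1
y_hat = predict_next_visit(attention_hidden_state(c_t, h_t))  # probabilities over |G| categories
```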
Preferably, computing each α_{ti} of the relative importance vector α_t by an attention mechanism in S2 specifically includes:
a location-based attention function, which computes the weight from the current hidden state h_i alone, as follows:
α_{ti} = W_α^T h_i + b_α
where W_α ∈ R^{2p} and b_α ∈ R are the parameters to be learned;
or, a general attention function, which uses a matrix W_α ∈ R^{2p×2p} to capture the relationship between h_t and h_i and computes the weight as follows:
α_{ti} = h_t^T W_α h_i
or, a concatenation-based function, which uses a multilayer perceptron (MLP): the current hidden state h_t is first concatenated with the previous state h_i, then multiplied by a weight matrix W_α ∈ R^{q×4q} to obtain a latent vector, where q is the latent dimension; tanh is selected as the activation function, and the attention weight is generated as follows:
α_{ti} = υ_α^T tanh(W_α [h_t; h_i])
where υ_α ∈ R^q is the parameter to be learned.
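The three attention functions above can be sketched as follows; the parameter shapes follow the text (W_α ∈ R^{2p}, R^{2p×2p}, or R^{q×4q}, and υ_α ∈ R^q), the placeholder values stand in for learned parameters, and q is taken equal to p purely so that the shapes of the concatenation-based variant line up in this toy example.

```python
import numpy as np

p = 2            # half of the BRNN hidden size; hidden states h_i have dimension 2p
q = p            # latent dimension for the concatenation-based function (q = p so shapes align here)
rng = np.random.default_rng(2)

# Placeholder parameters (learned in practice).
W_loc = rng.normal(size=(2 * p,))        # W_alpha in R^{2p} for location-based attention
b_loc = 0.0                              # b_alpha
W_gen = rng.normal(size=(2 * p, 2 * p))  # W_alpha in R^{2p x 2p} for general attention
W_cat = rng.normal(size=(q, 4 * q))      # W_alpha in R^{q x 4q} for concatenation-based attention
v_cat = rng.normal(size=(q,))            # upsilon_alpha in R^q

def score_location(h_i):
    # alpha_ti = W_alpha^T h_i + b_alpha  (depends on h_i only)
    return W_loc @ h_i + b_loc

def score_general(h_t, h_i):
    # alpha_ti = h_t^T W_alpha h_i  (bilinear relation between h_t and h_i)
    return h_t @ W_gen @ h_i

def score_concat(h_t, h_i):
    # alpha_ti = upsilon_alpha^T tanh(W_alpha [h_t; h_i])  (MLP over the concatenation)
    return v_cat @ np.tanh(W_cat @ np.concatenate([h_t, h_i]))

# Example: score one past hidden state against the current one.
h_t, h_i = rng.normal(size=2 * p), rng.normal(size=2 * p)
scores = (score_location(h_i), score_general(h_t, h_i), score_concat(h_t, h_i))
```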
Preferably, the method further includes, after S5, a step of interpreting the prediction, specifically:
a non-negative matrix is used to represent the medical codes; then each dimension of the attention hidden state vector is sorted by value in descending order, and finally the top k codes with the largest values are selected to obtain the clinical interpretation of each dimension,
where the i-th column or dimension of the non-negative matrix is taken.
The beneficial effects of the present invention are as follows: the diagnosis prediction method based on an attention-based bidirectional recurrent neural network provided by the invention first embeds the high-dimensional medical codes (i.e., clinical variables) into a low-dimensional code space, then feeds the embedded representations into an attention-based bidirectional recurrent neural network to generate hidden state representations. The hidden representations are fed into a softmax layer to predict the medical codes of future visits. Experimental data show that, with the method provided in this embodiment, the attention mechanism assigns different weights to previous visits when predicting future visit information; the method not only completes the diagnosis prediction task effectively, but also yields reasonably interpretable prediction results.
Description of the drawings
Fig. 1 is a flow diagram of the diagnosis prediction method based on an attention-based bidirectional recurrent neural network provided by the invention;
Fig. 2 is a schematic diagram of the attention mechanism analysis results for five patients in the specific embodiment.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below in conjunction with the accompanying drawings. It should be appreciated that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it.
As shown in Fig. 1, the present invention provides a diagnosis prediction method based on an attention-based bidirectional recurrent neural network, comprising the following steps:
Step 1) constructs a model of the medical codes x_t in the patient's electronic health record as a time-ordered sequence of diagnoses, intervention codes, and admission types;
Step 2) constructs different bidirectional recurrent neural networks according to the different attention mechanisms;
Step 3) performs diagnosis prediction;
Step 4) interprets the prediction.
The above steps are explained in detail as follows:
Electronic Health Records (EHR) consist of longitudinal patient health data. A patient's EHR data comprise a sequence of visits over time, where each visit contains multiple medical codes, including diagnosis, medication, and procedure codes. All unique medical codes in the EHR data are denoted c_1, c_2, …, c_|C| ∈ C, where |C| is the number of unique medical codes. Assume there are N patients and that the n-th patient has T^(n) visit records. A patient can be represented by a sequence of visits V_1, V_2, …, V_{T^(n)}. Each visit V_t contains a subset of medical codes and is represented by a binary vector x_t ∈ {0,1}^{|C|}, where the i-th element is 1 if V_t contains the code c_i. Each visit V_t also has a corresponding coarse-grained category representation y_t, where |G| denotes the number of unique categories. Each diagnosis code can be mapped to a node of the International Classification of Diseases (ICD-9), and each procedure code can be mapped to a node of the Current Procedural Terminology. Since the input x_t is extremely sparse and high-dimensional, it is natural to learn a low-dimensional and meaningful embedding of it.
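The multi-hot visit representation x_t described above can be illustrated with a small helper; the code vocabulary and the example patient below are made up for demonstration.

```python
import numpy as np

# Assumed toy vocabulary of unique medical codes c_1..c_|C|.
code_vocab = {"250.00": 0, "401.9": 1, "272.4": 2, "V58.67": 3}   # code -> index, |C| = 4

def visit_to_multihot(visit_codes, vocab):
    """Represent a visit V_t (a set of medical codes) as x_t in {0,1}^|C|."""
    x = np.zeros(len(vocab))
    for code in visit_codes:
        x[vocab[code]] = 1.0      # the i-th element is 1 if V_t contains code c_i
    return x

# Example patient: a sequence of visits, each a set of codes.
patient_visits = [{"401.9"}, {"401.9", "272.4"}, {"250.00", "V58.67"}]
X = np.stack([visit_to_multihot(v, code_vocab) for v in patient_visits])  # shape (T, |C|)
```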
Given the visit information from time 1 to t, the medical codes x_i ∈ {0,1}^{|C|} of the i-th visit can be embedded into a vector representation v_i:
v_i = ReLU(W_v x_i + b_c)
where m is the embedding dimension, W_v ∈ R^{m×|C|} is the weight matrix of the medical codes, and b_c ∈ R^m is the bias vector. ReLU is the rectified linear unit defined as ReLU(v) = max(v, 0), where max() is applied element-wise to the vector.
The vectors v_i are fed into a bidirectional recurrent neural network (BRNN), which outputs the hidden state h_i as the representation of the i-th visit; the set of hidden states is denoted {h_1, h_2, …}. The context state c_t is computed from the relative importances α_t and the hidden states, as follows:
c_t = Σ_{i=1}^{t-1} α_{ti} h_i
The attention weight vector α_t is obtained by the following softmax function:
α_t = Softmax([α_{t1}, α_{t2}, …, α_{t(t-1)}])
A BRNN consists of a forward and a backward recurrent neural network (RNN). The forward RNN reads the input visit sequence from x_1 to x_T and computes a sequence of forward hidden states h_1^→, …, h_T^→ (where p is the dimension of a hidden state). The backward RNN reads the visit sequence in reverse order, i.e., from x_T to x_1, and generates a sequence of backward hidden states h_T^←, …, h_1^←. By concatenating the forward hidden state h_i^→ and the backward hidden state h_i^←, the final latent vector representation h_i = [h_i^→; h_i^←] is obtained.
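A minimal BRNN sketch in NumPy follows; the patent does not fix a particular recurrent cell, so a plain tanh cell is assumed here purely to show how the forward and backward hidden states are computed and concatenated into h_i = [h_i^→; h_i^←].

```python
import numpy as np

def rnn_pass(V, W_x, W_h, b):
    """Run a simple tanh RNN over the embedded visits V (shape (T, m)) and return all hidden states."""
    T, p = V.shape[0], W_h.shape[0]
    h, states = np.zeros(p), []
    for t in range(T):
        h = np.tanh(W_x @ V[t] + W_h @ h + b)
        states.append(h)
    return np.stack(states)          # shape (T, p)

def brnn(V, params_f, params_b):
    """Concatenate forward states (x_1..x_T) and backward states (x_T..x_1) into h_i = [h_i->; h_i<-]."""
    h_fwd = rnn_pass(V, *params_f)               # forward RNN reads the sequence left to right
    h_bwd = rnn_pass(V[::-1], *params_b)[::-1]   # backward RNN reads right to left, then re-align
    return np.concatenate([h_fwd, h_bwd], axis=1)   # shape (T, 2p)

# Example with assumed sizes: T = 5 visits, embedding m = 8, hidden p = 4.
rng = np.random.default_rng(3)
m, p = 8, 4

def make_params():
    return (rng.normal(scale=0.1, size=(p, m)),
            rng.normal(scale=0.1, size=(p, p)),
            np.zeros(p))

H = brnn(rng.normal(size=(5, m)), make_params(), make_params())   # rows are h_1..h_5
```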
Three attention mechanisms can be used to compute each α_{ti} of the relative importance vector α_t and capture the relevant information:
A location-based attention function computes the weight from the current hidden state h_i alone, as follows:
α_{ti} = W_α^T h_i + b_α
where W_α ∈ R^{2p} and b_α ∈ R are the parameters to be learned.
A general attention function uses a matrix W_α ∈ R^{2p×2p} to capture the relationship between h_t and h_i and computes the weight as:
α_{ti} = h_t^T W_α h_i
Another way to compute α_{ti} is a concatenation-based function, which uses a multilayer perceptron (MLP). The current hidden state h_t is first concatenated with the previous state h_i, then multiplied by a weight matrix W_α ∈ R^{q×4q} to obtain a latent vector, where q is the latent dimension. tanh is selected as the activation function. The attention weight is generated as follows:
α_{ti} = υ_α^T tanh(W_α [h_t; h_i])
where υ_α ∈ R^q is the parameter to be learned.
Based on the context vector c_t given above and the current hidden state h_t, a simple concatenation layer is used to combine the information from the two vectors and generate an attention hidden state, as follows:
h̃_t = tanh(W_c [c_t; h_t])
where W_c ∈ R^{r×4p} is a weight matrix. The attention hidden state vector h̃_t is fed into a softmax layer to generate the (t+1)-th visit information, defined as:
ŷ_{t+1} = Softmax(W_s h̃_t + b_s)
where W_s and b_s are the parameters to be learned. The cross entropy between the true visit information y_t and the predicted visit ŷ_t is used to compute the loss over all patients, as follows:
L = -(1/N) Σ_{n=1}^{N} (1/(T^(n)-1)) Σ_{t=1}^{T^(n)-1} ( y_t^T log ŷ_t + (1-y_t)^T log(1-ŷ_t) )
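The cross-entropy loss for one patient can be sketched as follows, assuming multi-label binary targets y_t over the categories and averaging over the T-1 predicted visits; the example targets and predictions are made up.

```python
import numpy as np

def visit_cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross entropy between the true visit vector y_t and the predicted probabilities y^_t."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # avoid log(0)
    return -(y_true @ np.log(y_pred) + (1.0 - y_true) @ np.log(1.0 - y_pred))

def patient_loss(Y_true, Y_pred):
    """Average the per-visit cross entropy over the T-1 predicted visits of one patient."""
    return np.mean([visit_cross_entropy(y, p) for y, p in zip(Y_true, Y_pred)])

# Example with made-up targets and predictions for 2 predicted visits and |G| = 4 categories.
Y_true = np.array([[1, 0, 0, 1], [0, 1, 0, 0]], dtype=float)
Y_pred = np.array([[0.7, 0.1, 0.2, 0.6], [0.2, 0.8, 0.1, 0.1]])
loss = patient_loss(Y_true, Y_pred)
```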
In health care, the interpretability of the learned medical code representations and visit representations is extremely important: it is necessary to understand the clinical meaning of each dimension of the medical code representation, and to analyze which visits are most important to the prediction. Since the proposed model is based on an attention mechanism, the importance of each visit for the prediction can easily be found by analyzing the attention scores. For the t-th prediction, if the attention score α_{ti} is large, the information of the (i+1)-th visit has a high influence on the current prediction. First, a non-negative matrix is used to represent the medical codes. Then, each dimension of the attention hidden state vector is sorted by value in descending order. Finally, the top k codes with the largest values are selected,
where the i-th column or dimension of the non-negative matrix is taken. By analyzing the selected medical codes, the clinical interpretation of each dimension can be obtained.
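A sketch of this interpretation step, under the assumption (not stated explicitly above) that the non-negative matrix is, for example, the rectified code embedding matrix ReLU(W_v) arranged with one row per code: for a chosen dimension, the codes are ranked by that dimension's value and the top k are returned.

```python
import numpy as np

def top_k_codes_for_dimension(W_nonneg, dim, k, index_to_code):
    """Return the k medical codes with the largest values in column `dim` of a non-negative matrix.

    W_nonneg      : non-negative matrix of shape (|C|, r) relating codes to hidden dimensions (assumed layout)
    dim           : the dimension of the attention hidden state vector to interpret
    index_to_code : list mapping row index back to a medical code string
    """
    order = np.argsort(W_nonneg[:, dim])[::-1]      # sort the dimension's values in descending order
    return [index_to_code[i] for i in order[:k]]    # keep the top-k codes

# Toy example with an assumed 4-code vocabulary and r = 3 hidden dimensions.
index_to_code = ["250.00", "401.9", "272.4", "V58.67"]
rng = np.random.default_rng(4)
W_nonneg = np.maximum(rng.normal(size=(4, 3)), 0)   # e.g. ReLU of a learned code matrix
print(top_k_codes_for_dimension(W_nonneg, dim=0, k=2, index_to_code=index_to_code))
```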
Specific application example
In order to illustrate the technical effect of the invention, the present invention is verified with a specific application example.
The experiments use two datasets: Medicaid claims and diabetes claims. The Medicaid dataset contains Medicaid claims from 2011. It covers 147,810 patients and 1,055,011 related medical records. Patient visits are grouped by week, and patients with fewer than 2 visits are excluded. The diabetes dataset contains Medicaid claims from 2012 and 2013, corresponding to patients diagnosed with diabetes (i.e., Medicaid members with ICD-9 diagnosis code 250.xx in their claims). It covers 22,820 patients with 466,732 visits in total. Patient visits are aggregated by week, and patients with fewer than 5 visits are excluded.
For each dataset, the data are randomly split into training, validation, and test sets with a ratio of 0.75:0.10:0.15. The validation set is used to determine the optimal parameter values; 100 iterations are executed, and the best performance of each method is reported.
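The 0.75:0.10:0.15 split can be reproduced with a few lines; the patient-id list below is a placeholder.

```python
import numpy as np

def split_patients(patient_ids, ratios=(0.75, 0.10, 0.15), seed=0):
    """Randomly split patient ids into training, validation, and test sets with the given ratios."""
    ids = np.array(patient_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(ratios[0] * n)
    n_valid = int(ratios[1] * n)
    return ids[:n_train], ids[n_train:n_train + n_valid], ids[n_train + n_valid:]

# Example with placeholder patient ids.
train_ids, valid_ids, test_ids = split_patients(range(1000))
```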
Experiment one:
The statistics of the datasets are shown in Table 1:
Table 1
Experiment two:
The following baseline models are executed: (1) Med2Vec (model 1); (2) RETAIN (model 2); (3) RNN (model 3).
The following RNN-based prediction models are executed: (1) RNN_l, which computes the relative importance with the location-based attention function (model 4); (2) RNN_g, which computes the relative importance with the general attention function (model 5); (3) RNN_c, which computes the relative importance with the concatenation-based attention function (model 6).
The following Dipole models are executed: (1) Dipole_-, which does not use any attention mechanism (model 7); (2) Dipole_l, which computes the relative importance with the location-based attention function (model 8); (3) Dipole_g, which computes the relative importance with the general attention function (model 9); (4) Dipole_c, which computes the relative importance with the concatenation-based attention function (model 10).
Experimental results and analysis
Table 2 shows the accuracy of all methods on the diabetes dataset:
Table 2
From Table 2 it can be observed that, since most of the medical codes are related to diabetes, Med2Vec (model 2) can learn accurate vector representations on the diabetes dataset. Therefore, Med2Vec achieves the best result among the three baselines. On the Medicaid dataset, the accuracy of RETAIN (model 3) is better than that of Med2Vec. The reason is that the Medicaid dataset covers many diseases and contains more kinds of medical codes than the diabetes dataset. In this case, the attention mechanism can help RETAIN learn reasonable parameters and make correct predictions.
Among all methods on both datasets, the accuracy of RNN (model 1) is the lowest. This is because the prediction of an RNN depends on the most recent visit information; it cannot remember all past information. In contrast, RETAIN and the proposed RNN variants RNN_l (model 4), RNN_g (model 5), and RNN_c (model 6) can fully consider all previous visit information, assign different attention scores to past visits, and achieve better performance than RNN.
Since most visits in the diabetes dataset are related to diabetes, it is easy to predict the medical codes of the next visit from previous diagnosis information. RETAIN makes predictions with a reverse-time attention mechanism, which reduces prediction performance compared with methods that apply attention in time order, so the three RNN variants all outperform RETAIN on this dataset. However, on the Medicaid dataset the accuracy of RETAIN is higher than that of the RNN variants, because the Medicaid data cover various diseases and the reverse-time attention mechanism helps learn the correct visit relationships.
Neither RNN nor the proposed Dipole_- (model 7) uses any attention mechanism, yet on both the diabetes and Medicaid datasets the accuracy of Dipole_- is higher than that of RNN. This result shows that modeling the visit information from both directions can improve prediction performance. Therefore, it is reasonable to perform diagnosis prediction with a bidirectional recurrent neural network.
The proposed Dipole_c (model 10) and Dipole_l (model 8) achieve the best performance on the diabetes and Medicaid datasets, respectively. This shows that modeling visits from both directions and assigning different weights to each visit can both improve the accuracy of the diagnosis prediction task. On the diabetes dataset, Dipole_l and Dipole_c outperform all baselines and the proposed RNN variants. On the Medicaid dataset, the three methods Dipole_l, Dipole_g, and Dipole_c all outperform the baselines and the RNN variants.
Fig. 2 shows a case study on the diabetes dataset: predicting the medical codes of the 6th visit (ŷ_5) based on the previous visits. The concatenation-based attention weights of the 2nd to 5th visits are computed from the hidden states h_1, h_2, h_3. In Fig. 2, the x-axis is the patient and the y-axis is the attention weight of each visit. Five patients were selected for this case study. It can be observed that the attention scores learned by the attention mechanism differ across patients. For the second patient in Fig. 2, all diagnosis codes are listed in Table 3:
Table 3
First, the attention weights α = [0.2386, 0.0824, 0.2386, 0.0824] of the four visits of patient 2 are obtained from Fig. 2. From the analysis of this attention vector, it can be concluded that the medical codes of the second, fourth, and fifth visits have a significant impact on the final prediction. From Table 3 it can be seen that the patient had essential hypertension at the second and fourth visits and was diagnosed with diabetes at the fifth visit. Therefore, the probability of medical codes related to diabetes and essential hypertension at the sixth visit is higher.
In conclusion Dipole can make up modeling EHR data and explain the challenge of prediction result.Utilize forward-backward recutrnce mind Through network, Dipole can remember the hiding knowledge acquired in the previous and following access.Three kinds of attention mechanisms can be reasonably Interpretation prediction result.The results show on the true EHR data set of the two large sizes Dipole is in diagnosis prediction task In validity.Analysis shows attention mechanism can distribute different power to access before when predicting the following access information Weight.
By using above-mentioned technical proposal disclosed by the invention, following beneficial effect has been obtained: base provided by the invention In the diagnosis prediction method of the forward-backward recutrnce neural network of attention, the Medical coding of higher-dimension (i.e. clinical variable) is embedded in first Then coded representation is input in the forward-backward recutrnce neural network based on attention by low code layer space, generate and hide shape State indicates.By the hiding expression of softmax layers of input, to predict the medical code of the following access.Experimental data shows to use Method provided in this embodiment, when predicting the following access information, attention mechanism can distribute different power to access before Weight, can not only effectively complete diagnosis prediction task, and being capable of reasonably interpretation prediction result.
The above is only a preferred embodiment of the present invention, it is noted that for the ordinary skill people of the art For member, various improvements and modifications may be made without departing from the principle of the present invention, these improvements and modifications are also answered Depending on protection scope of the present invention.

Claims (3)

1. A diagnosis prediction method based on an attention-based bidirectional recurrent neural network, characterized by comprising the following steps:
S1. Construct a model of the medical codes x_t in the patient's electronic health record as a time-ordered sequence of diagnoses, intervention codes, and admission types. Given the visit information from time 1 to t, embed the medical codes x_i ∈ {0,1}^{|C|} of the i-th visit into a vector representation v_i:
v_i = ReLU(W_v x_i + b_c)
where |C| is the number of unique medical codes, m is the embedding dimension, W_v ∈ R^{m×|C|} is the weight matrix of the medical codes, and b_c ∈ R^m is the bias vector. ReLU is the rectified linear unit defined as ReLU(v) = max(v, 0), where max() is applied element-wise to the vector;
S2. Construct different bidirectional recurrent neural networks according to the different attention mechanisms:
Feed the vectors v_i into a bidirectional recurrent neural network (BRNN) and output the hidden state h_i as the representation of the i-th visit; the set of hidden states is denoted {h_1, h_2, …},
where each h_i = [h_i^→; h_i^←] is the concatenation of the forward hidden state h_i^→ and the backward hidden state h_i^←;
S3. Compute the context state vector c_t from the relative importances α_t and the hidden states h_1, …, h_{t-1}, as follows:
c_t = Σ_{i=1}^{t-1} α_{ti} h_i
The attention weight vector α_t is obtained by the following softmax function:
α_t = Softmax([α_{t1}, α_{t2}, …, α_{t(t-1)}])
where each α_{ti} of the relative importance vector α_t is computed by an attention mechanism;
S4. Based on the context state vector c_t and the current hidden state h_t, use a simple concatenation layer to combine the information from the two vectors and generate an attention hidden state vector, as follows:
h̃_t = tanh(W_c [c_t; h_t])
where W_c ∈ R^{r×4p} is a weight matrix;
S5. Feed the attention hidden state vector h̃_t into a softmax layer to generate the (t+1)-th visit information, defined as:
ŷ_{t+1} = Softmax(W_s h̃_t + b_s)
where W_s and b_s are the parameters to be learned and |G| is the number of unique categories.
2. The diagnosis prediction method based on an attention-based bidirectional recurrent neural network according to claim 1, characterized in that computing each α_{ti} of the relative importance vector α_t by an attention mechanism in S2 specifically includes:
a location-based attention function, which computes the weight from the current hidden state h_i alone, as follows:
α_{ti} = W_α^T h_i + b_α
where W_α ∈ R^{2p} and b_α ∈ R are the parameters to be learned;
or, a general attention function, which uses a matrix W_α ∈ R^{2p×2p} to capture the relationship between h_t and h_i and computes the weight as follows:
α_{ti} = h_t^T W_α h_i
or, a concatenation-based function, which uses a multilayer perceptron (MLP): the current hidden state h_t is first concatenated with the previous state h_i, then multiplied by a weight matrix W_α ∈ R^{q×4q} to obtain a latent vector, where q is the latent dimension; tanh is selected as the activation function, and the attention weight is generated as follows:
α_{ti} = υ_α^T tanh(W_α [h_t; h_i])
where υ_α ∈ R^q is the parameter to be learned.
3. The diagnosis prediction method based on an attention-based bidirectional recurrent neural network according to claim 1, characterized by further comprising, after S5, a step of interpreting the prediction, specifically:
a non-negative matrix is used to represent the medical codes; then each dimension of the attention hidden state vector is sorted by value in descending order, and finally the top k codes with the largest values are selected to obtain the clinical interpretation of each dimension,
where the i-th column or dimension of the non-negative matrix is taken.
CN201910149279.2A 2019-02-28 2019-02-28 Attention-based diagnosis and prediction method for bidirectional recurrent neural network Active CN109887606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910149279.2A CN109887606B (en) 2019-02-28 2019-02-28 Attention-based diagnosis and prediction method for bidirectional recurrent neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910149279.2A CN109887606B (en) 2019-02-28 2019-02-28 Attention-based diagnosis and prediction method for bidirectional recurrent neural network

Publications (2)

Publication Number Publication Date
CN109887606A true CN109887606A (en) 2019-06-14
CN109887606B CN109887606B (en) 2022-10-18

Family

ID=66929852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910149279.2A Active CN109887606B (en) 2019-02-28 2019-02-28 Attention-based diagnosis and prediction method for bidirectional recurrent neural network

Country Status (1)

Country Link
CN (1) CN109887606B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667339A (en) * 2020-05-26 2020-09-15 吉林大学 Defamation malicious user detection method based on improved recurrent neural network
CN111965116A (en) * 2020-07-21 2020-11-20 天津大学 Hyperspectrum-based airport gas detection system and method
CN112562849A (en) * 2020-12-08 2021-03-26 中国科学技术大学 Clinical automatic diagnosis method and system based on hierarchical structure and co-occurrence structure

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778014A (en) * 2016-12-29 2017-05-31 浙江大学 A kind of risk Forecasting Methodology based on Recognition with Recurrent Neural Network
US20170316313A1 (en) * 2015-07-27 2017-11-02 Google Inc. Analyzing health events using recurrent neural networks
CN108376558A (en) * 2018-01-24 2018-08-07 复旦大学 A kind of multi-modal nuclear magnetic resonance image Case report no automatic generation method
CN109165667A (en) * 2018-07-06 2019-01-08 中国科学院自动化研究所 Based on the cerebral disease categorizing system from attention mechanism

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170316313A1 (en) * 2015-07-27 2017-11-02 Google Inc. Analyzing health events using recurrent neural networks
CN106778014A (en) * 2016-12-29 2017-05-31 浙江大学 A kind of risk Forecasting Methodology based on Recognition with Recurrent Neural Network
CN108376558A (en) * 2018-01-24 2018-08-07 复旦大学 A kind of multi-modal nuclear magnetic resonance image Case report no automatic generation method
CN109165667A (en) * 2018-07-06 2019-01-08 中国科学院自动化研究所 Based on the cerebral disease categorizing system from attention mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吕鸿蒙 et al.: "Early diagnosis of Alzheimer's disease based on deep learning with enhanced AlexNet", Computer Science (《计算机科学》) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667339A (en) * 2020-05-26 2020-09-15 吉林大学 Defamation malicious user detection method based on improved recurrent neural network
CN111667339B (en) * 2020-05-26 2022-05-27 吉林大学 Defamation malicious user detection method based on improved recurrent neural network
CN111965116A (en) * 2020-07-21 2020-11-20 天津大学 Hyperspectrum-based airport gas detection system and method
CN112562849A (en) * 2020-12-08 2021-03-26 中国科学技术大学 Clinical automatic diagnosis method and system based on hierarchical structure and co-occurrence structure
CN112562849B (en) * 2020-12-08 2023-11-17 中国科学技术大学 Clinical automatic diagnosis method and system based on hierarchical structure and co-occurrence structure

Also Published As

Publication number Publication date
CN109887606B (en) 2022-10-18

Similar Documents

Publication Publication Date Title
US11488718B2 (en) Computer aided medical method and medical system for medical prediction
JP7305656B2 (en) Systems and methods for modeling probability distributions
Bhatla et al. A novel approach for heart disease diagnosis using data mining and fuzzy logic
CN109785928A (en) Diagnosis and treatment proposal recommending method, device and storage medium
Wang et al. Medical prognosis based on patient similarity and expert feedback
Meder et al. Information search with situation-specific reward functions
CN109887606A (en) A kind of diagnosis prediction method of the forward-backward recutrnce neural network based on attention
Dabowsa et al. A hybrid intelligent system for skin disease diagnosis
CN112233810B (en) Treatment scheme comprehensive curative effect evaluation method based on real world clinical data
CN111798954A (en) Drug combination recommendation method based on time attention mechanism and graph convolution network
TWI778289B (en) Control method and medical system
KR20190086345A (en) Time series data processing device, health predicting system including the same, and method for operating time series data processing device
Kaul et al. Comparitive study on healthcare prediction systems using big data
WO2022005626A1 (en) Partially-observed sequential variational auto encoder
Li et al. Integrating static and time-series data in deep recurrent models for oncology early warning systems
CN117034142B (en) Unbalanced medical data missing value filling method and system
US20190221294A1 (en) Time series data processing device, health prediction system including the same, and method for operating the time series data processing device
CN116070693B (en) Patient information and medical service relation detection model training and detection method and device
Nissimagoudar et al. AlertNet: Deep convolutional-recurrent neural network model for driving alertness detection
CN113436743B (en) Representation learning-based multi-outcome efficacy prediction method, device and storage medium
CN112329921B (en) Diuretic dose reasoning equipment based on deep characterization learning and reinforcement learning
Wang et al. Application of physical examination data on health analysis and intelligent diagnosis
Boloka et al. Anomaly detection monitoring system for healthcare
Caruana Case-based explanation for artificial neural nets
CN112669973B (en) Disease collaborative progressive prediction method based on big data deep learning and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant