CN113724824B - Chronic patient follow-up method, device, computer equipment and readable storage medium - Google Patents


Info

Publication number
CN113724824B
CN113724824B (application CN202111017310.0A)
Authority
CN
China
Prior art keywords
patient
follow
visited
intervention
period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111017310.0A
Other languages
Chinese (zh)
Other versions
CN113724824A (en)
Inventor
廖希洋
马凯宁
欧秋雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111017310.0A
Publication of CN113724824A
Application granted
Publication of CN113724824B
Legal status: Active

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The present application belongs to the technical field of artificial intelligence and provides a chronic disease patient follow-up method, apparatus, computer device, and readable storage medium. The method comprises the following steps: acquiring characteristic data of each chronic patient; analyzing the characteristic data of each chronic patient by using a first reinforcement learning model, determining the intervention form of each chronic patient, and taking the chronic patients whose intervention form is manual intervention as patients to be visited, forming a set of patients to be visited; acquiring the characteristic data and follow-up preference period of each patient to be visited, calculating the intervention weight of each patient to be visited, and acquiring a plurality of idle periods of the healthcare worker; and analyzing the characteristic data, follow-up preference period, and intervention weight of each patient to be visited, together with the healthcare worker's idle periods, by using a second reinforcement learning model, generating the healthcare worker's follow-up order over the patients to be visited. The application facilitates medical follow-up work, improves the follow-up effect, and is applicable to the smart healthcare field.

Description

Chronic patient follow-up method, device, computer equipment and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a chronic patient follow-up method and apparatus, a computer device, and a readable storage medium.
Background
Chronic diseases are lifelong conditions, and chronic patients must stay on treatment over the long term. At present, chronic disease follow-up work in China is carried out by medical staff after their normal medical work is completed. Because that normal workload is heavy, medical resources are limited, and the daily workload varies, follow-up work has no fixed time; as a result, follow-up work is inconvenient to carry out and its effect is poor.
Disclosure of Invention
The main aim of the present application is to provide a chronic patient follow-up method, apparatus, computer device, and readable storage medium, in order to solve the technical problems that medical staff's follow-up work on chronic patients is inconvenient to carry out and the follow-up effect is poor.
In a first aspect, the present application provides a method of follow-up for a chronically ill patient, the method comprising:
acquiring characteristic data of each chronic patient, wherein the characteristic data comprises personal characteristics and state characteristics in a previous follow-up period;
analyzing the characteristic data of each chronic patient by using a first reinforcement learning model, determining an intervention form of each chronic patient in a current follow-up period, and taking the chronic patients whose intervention form is manual intervention as patients to be followed up, forming a set of patients to be followed up;
Acquiring characteristic data of each patient to be visited and a follow-up preference period in a current follow-up period in the set of patients to be visited, calculating intervention weight of each patient to be visited in the current follow-up period, and acquiring a plurality of idle periods of medical staff in the current follow-up period;
and analyzing the characteristic data, the follow-up preference time period, the intervention weight and the idle time periods of the medical staff of each patient to be visited in the set of patients to be visited by using a second reinforcement learning model, and generating the follow-up sequence of the medical staff to each patient to be visited in the current follow-up period.
In a second aspect, the present application also provides a chronic patient follow-up device, the device comprising:
the first acquisition module is used for acquiring characteristic data of each chronic patient, wherein the characteristic data comprises personal characteristics and state characteristics in the previous follow-up period;
the determining module is used for analyzing the characteristic data of each chronic patient by utilizing a first reinforcement learning model, determining an intervention form of each chronic patient in the current follow-up period, and taking the chronic patients whose intervention form is manual intervention as patients to be followed up, forming a set of patients to be followed up;
The second acquisition module is used for acquiring the characteristic data of each patient to be visited and the follow-up preference time period in the current follow-up period in the set of patients to be visited, calculating the intervention weight of each patient to be visited in the current follow-up period, and acquiring a plurality of idle time periods of medical staff in the current follow-up period;
the generation module is used for analyzing the characteristic data, the follow-up preference time period, the intervention weight and the plurality of idle time periods of the medical staff of each patient to be visited in the set of patients to be visited by using a second reinforcement learning model, and generating the follow-up sequence of the medical staff to each patient to be visited in the current follow-up period.
In a third aspect, the present application also provides a computer device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program when executed by the processor implements the steps of a chronic patient follow-up method as described above.
In a fourth aspect, the present application also provides a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a chronic patient follow-up method as described above.
The application discloses a chronic disease patient follow-up method, apparatus, computer device, and readable storage medium. The chronic disease patient follow-up method acquires characteristic data of each chronic patient, the characteristic data comprising personal features and state features in the previous follow-up period; analyzes the characteristic data of each chronic patient by using a first reinforcement learning model, determines an intervention form of each chronic patient in the current follow-up period, and takes the chronic patients whose intervention form is manual intervention as patients to be followed up, forming a set of patients to be followed up; acquires the characteristic data of each patient to be visited and the follow-up preference period in the current follow-up period, calculates the intervention weight of each patient to be visited in the current follow-up period, and acquires the healthcare worker's idle periods in the current follow-up period; and analyzes the characteristic data, follow-up preference period, and intervention weight of each patient to be visited, together with the healthcare worker's idle periods, by using a second reinforcement learning model, generating the healthcare worker's follow-up order over the patients to be visited in the current follow-up period.
Screening of the patients to be visited is realized with the first reinforcement learning model, and the patient intervention weight is creatively introduced so that the urgency of each patient to be visited can be distinguished more effectively. Follow-up prioritization of the patients to be visited over the idle periods is realized with the second reinforcement learning model, so that medical resources can be allocated more reasonably, effectively, and scientifically. This provides convenience for the follow-up work of medical staff, can improve the follow-up effect, and makes follow-up work more intelligent.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment of a method for follow-up for patients with chronic diseases according to the present application;
FIG. 2 is an exemplary view of a follow-up scenario involved in one embodiment of a chronic patient follow-up method of the present application;
FIG. 3 is a schematic block diagram of a chronic patient follow-up device according to one embodiment of the present application;
fig. 4 is a schematic block diagram of a computer device according to an embodiment of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Embodiments of the present application provide a chronic patient follow-up method, apparatus, computer device, and readable storage medium. The chronic patient follow-up method is mainly applied to a chronic patient follow-up device, which may be a server or other terminal device with data processing capability and which is loaded with a follow-up management system.
The server may be an independent server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flow chart of a follow-up method for chronic patients according to an embodiment of the present application.
As shown in fig. 1, the chronic patient follow-up method includes steps S101 to S104.
Step S101, feature data of each chronic patient is acquired, wherein the feature data comprise personal features and state features in the previous follow-up period.
Wherein, the follow-up management system is in communication connection with the chronic disease management system. Feature data for a chronically ill patient is obtained from a chronically ill management system, the feature data including personal features and status features during a previous follow-up period. Taking the follow-up application scenario of diabetics as an example, the personal characteristics comprise age, gender, education degree, past disease history, treatment condition and the like; the state characteristics in the last follow-up period comprise intervention forms, blood sugar recording times, whether the blood sugar recorded each time reaches the standard, insulin use conditions and the like in the last follow-up period, wherein the intervention forms comprise manual intervention and manual non-intervention.
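As an illustration of the kind of record described above, the following sketch shows how a diabetic patient's personal features and previous-period state features might be structured and flattened into a numeric state vector. The field names and the encoding are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PatientFeatures:
    """Illustrative feature record for a diabetic patient (field names are assumed)."""
    age: int
    gender: str
    education: str
    disease_history: list
    # State features from the previous follow-up period
    last_intervention: str     # "manual" or "none"
    glucose_readings: int      # number of blood-glucose recordings
    readings_on_target: int    # how many readings met the target range
    uses_insulin: bool

def to_state_vector(p: PatientFeatures) -> list:
    """Flatten the record into a numeric state vector for the model (encoding is illustrative)."""
    return [
        float(p.age),
        1.0 if p.gender == "F" else 0.0,
        1.0 if p.last_intervention == "manual" else 0.0,
        float(p.glucose_readings),
        p.readings_on_target / max(p.glucose_readings, 1),  # on-target rate
        1.0 if p.uses_insulin else 0.0,
    ]

patient = PatientFeatures(54, "F", "secondary", ["hypertension"], "manual", 20, 14, True)
vec = to_state_vector(patient)
```

The on-target rate (fifth component) condenses the "whether the blood sugar recorded each time reaches the standard" feature into a single number.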
Step S102, analyzing the characteristic data of each chronic patient by using a first reinforcement learning model, predicting the intervention form of each chronic patient in the current follow-up period, and taking the chronic patients whose intervention form is manual intervention as patients to be followed up, forming a set of patients to be followed up.
The first reinforcement learning model is a DQN (Deep Q-Network) model trained on a first training sample set and used for predicting the intervention form of a chronic patient. The DQN model combines a neural network with the Q-learning algorithm, using the neural network to fit the policy. During training, a state is input into the DQN model, which outputs the Q value (reward expectation) corresponding to each action (manual intervention / no manual intervention); the action with the largest Q value is the one the DQN model considers should be selected. The value of an action is called its Q value, representing the expected cumulative reward the DQN agent receives, up to the final state, after selecting that action.
Specifically, training the DQN model to obtain a first reinforcement learning model includes:
a. obtaining a first training sample set for training the DQN model, the first training sample set comprising a plurality of first training samples, each first training sample being characteristic data of a patient for training;
b. Training the DQN model to convergence on the first training sample set to obtain the first reinforcement learning model: a first training sample is defined as a state and input into the DQN model for training. The agent of the DQN model moves from one state s to another state s' by selecting an action a (manual intervention / no manual intervention), and a reward is fed back to the agent. The reward represents the return obtained after the DQN model selects action a for the state the patient in the first training sample was in at the time of that follow-up; it may be positive or negative: a positive reward encourages the agent to keep acting this way in that state, while a negative reward indicates the agent should not. During training, the states, actions, and rewards are continuously adjusted so that the Q value output by the DQN model approaches the reward; at that point the DQN model has converged, training is complete, and the first reinforcement learning model is obtained.
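The training loop above can be sketched in miniature. This is a deliberately simplified stand-in for the patent's DQN: a linear Q function per action (no neural network, no replay buffer), trained so that Q(s, a) approaches the observed reward, which is the convergence criterion just described. The three-feature state encoding and the reward rule are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# One weight row per action (0 = no manual intervention, 1 = manual intervention),
# over 3 illustrative state features.
W = np.zeros((2, 3))

def q_values(state):
    return W @ state  # Q(s, a) for both actions

def reward(state, action):
    # Assumed rule: intervening on a poorly controlled patient (feature 2 high)
    # earns +1, otherwise -1; the reverse for not intervening.
    needs_help = state[2] > 0.5
    return 1.0 if (action == 1) == needs_help else -1.0

alpha = 0.1
for _ in range(2000):
    s = rng.random(3)
    a = int(rng.integers(2))          # explore both actions
    r = reward(s, a)
    td_error = r - q_values(s)[a]     # one-step episode: the target is r itself
    W[a] += alpha * td_error * s      # gradient step toward the target

# After training, the greedy action matches the reward rule for a
# poorly controlled patient (feature 2 = 0.9).
s_bad_control = np.array([0.5, 0.5, 0.9])
best_action = int(np.argmax(q_values(s_bad_control)))
```

Because each intervention decision is a single-step episode here, the Q target reduces to the immediate reward, which mirrors the "Q value approaches the reward" convergence statement above.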
The characteristic data of each chronic patient is analyzed by using the first reinforcement learning model, so that the intervention form of each chronic patient can be more accurately determined.
In some embodiments, analyzing the characteristic data of each chronic patient using the first reinforcement learning model to predict the intervention form of each chronic patient in the current follow-up period is specifically: inputting the characteristic data of each chronic patient into the first reinforcement learning model for prediction, and obtaining, for the current follow-up period, the first reward expectation corresponding to manual intervention and the second reward expectation corresponding to no manual intervention for each chronic patient, as output by the first reinforcement learning model; and determining the intervention form of each chronic patient in the current follow-up period according to that patient's first and second reward expectations.
Determining the intervention form of each chronic patient in the current follow-up period according to the corresponding first and second reward expectations is specifically: comparing the first reward expectation and the second reward expectation corresponding to each chronic patient; and, for each chronic patient whose first reward expectation is greater than the second, determining the intervention form in the current follow-up period to be manual intervention.
That is, the characteristic data of each chronic patient are defined as states and input into the first reinforcement learning model for prediction, yielding, for the current follow-up period, the reward expectation corresponding to manual intervention (defined as the first reward expectation) and the reward expectation corresponding to no manual intervention (defined as the second reward expectation) for each chronic patient. The first and second reward expectations of each chronic patient are then compared: for a chronic patient whose first reward expectation is greater than the second, the intervention form in the current follow-up period is determined to be manual intervention; for a chronic patient whose first reward expectation is smaller than the second, it is determined to be no manual intervention.
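The screening step just described reduces to a per-patient comparison of the two reward expectations. A minimal sketch, with invented patient identifiers and numbers:

```python
def screen_patients(expectations):
    """expectations: {patient_id: (q_intervene, q_no_intervene)}.

    Returns the ids whose first reward expectation (manual intervention)
    exceeds the second (no manual intervention) -- the patients to be visited.
    """
    to_follow = []
    for pid, (q, q_prime) in expectations.items():
        if q > q_prime:  # intervention form: manual intervention
            to_follow.append(pid)
    return to_follow

# Illustrative model outputs for three patients in the current follow-up period.
cohort = {
    "patient_a": (0.82, 0.35),   # manual intervention
    "patient_b": (0.10, 0.64),   # no manual intervention this period
    "patient_c": (0.55, 0.41),   # manual intervention
}
to_follow_set = screen_patients(cohort)
```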
Further, a plurality of chronic patients with intervention forms of manual intervention are taken as patients to be visited, and a patient set to be visited is constructed according to the plurality of patients to be visited.
Step S103, the characteristic data of each patient to be visited and the follow-up preference time period in the current follow-up period in the set of patients to be visited are obtained, the intervention weight of each patient to be visited in the current follow-up period is calculated, and a plurality of idle time periods of medical staff in the current follow-up period are obtained.
Characteristic data of each patient to be visited in the set of patients to be visited is obtained from the chronic disease management system.
And acquiring the follow-up preference periods of the patients to be visited from the patient terminals to be visited, such as sending an acquisition request for the follow-up preference periods to the patient terminals to be visited, so as to receive the follow-up preference periods returned by the patient terminals to be visited.
In order to more effectively distinguish the urgency of each patient to be followed, an intervention weight is creatively introduced.
In some embodiments, calculating the intervention weight of each patient to be visited in the current follow-up period is specifically: according to the first reward expectation and the second reward expectation corresponding to each patient to be visited, and in combination with a preset intervention weight calculation formula, the intervention weight of each patient to be visited in the current follow-up period is calculated, wherein W represents the intervention weight, Q represents the first reward expectation corresponding to manual intervention for the patient to be visited, and Q' represents the second reward expectation corresponding to no manual intervention for the patient to be visited.
That is, according to the first reward expectation of manual intervention and the second reward expectation of no manual intervention output by the first reinforcement learning model for each patient to be visited, the intervention weight of each patient to be visited is calculated in combination with the preset intervention weight calculation formula, wherein:
W represents the intervention weight;
Q represents the first reward expectation corresponding to manual intervention for the patient to be visited;
Q' represents the second reward expectation corresponding to no manual intervention for the patient to be visited.
The higher the intervention weight of the patient to be visited, the higher the requirement of the patient to be visited for manual intervention by medical staff.
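The text above does not reproduce the preset weight formula itself, so the sketch below assumes a simple difference-based form, W = Q - Q', purely to illustrate how such a weight orders patients by urgency; it is not the patent's actual formula:

```python
def intervention_weight(q, q_prime):
    """Assumed illustrative form W = Q - Q' (not the patent's actual formula):
    the larger the advantage of intervening over not intervening, the more
    urgent the patient's need for manual intervention."""
    return q - q_prime

# Reward expectations for two patients to be visited (invented numbers).
weights = {
    "patient_a": intervention_weight(0.82, 0.35),
    "patient_c": intervention_weight(0.55, 0.41),
}
most_urgent = max(weights, key=weights.get)
```

Any monotone function of the gap between Q and Q' would rank patients the same way; the point is only that a larger first-vs-second expectation gap means a higher intervention weight.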
The medical staff can set an idle period at the medical end of the follow-up management system, and the idle period can be adjusted at any time. Thus, a plurality of idle periods of the medical staff in the current follow-up period can be obtained from the medical end.
Step S104, analyzing the characteristic data, the follow-up preference period, and the intervention weight of each patient to be visited, together with the healthcare worker's idle periods, by using a second reinforcement learning model, and generating a target follow-up sequence of the healthcare worker for each patient to be visited in the current follow-up period.
The second reinforcement learning model is a DQN model that has undergone ranking-oriented learning on a second training sample set and is used for intelligent ranking; it can automatically compute the healthcare worker's follow-up order over the patients to be visited across the idle periods. In the ranking-learning process, the characteristic data, follow-up preference periods, and intervention weights of the patients in the second training sample set, together with the healthcare worker's idle periods, are defined as states, and follow-up of each patient in the second training sample set is defined as an action. When the healthcare worker in the second training sample set completes follow-up of a patient, a corresponding reward is fed back according to the follow-up effect; for example, the better the healthcare worker's follow-up priority for a patient matches the urgency of that patient's follow-up need, the higher the reward, thereby guiding the healthcare worker toward more efficient follow-up choices. The urgency of a patient's follow-up need takes into account not only the patient's intervention weight but also the patient's specific characteristic data and follow-up preference period.
Specifically, training the DQN model to obtain a second reinforcement learning model includes:
c. Obtaining a second training sample set for training the DQN model, the second training sample set comprising a plurality of second training samples, each second training sample consisting of the characteristic data, follow-up preference period, and intervention weight of a patient used for training, together with a plurality of idle periods of the healthcare worker used for training.
d. Performing ranking-oriented training on the DQN model according to the second training sample set to obtain the second reinforcement learning model: the characteristic data, follow-up preference period, and intervention weight of each patient in the second training sample set, together with the healthcare worker's idle periods, are defined as the states S = (s_0, s_1, ..., s_t, s_{t+1}), and follow-up of the patients in the second training sample set is defined as the actions A = (a_0, a_1, ..., a_t, a_{t+1}). The DQN model is trained so that the healthcare worker in the second training sample set, in the initial state s_0, selects one patient from the patient set of the second training sample set to complete follow-up a_0 and obtains reward r_0; after completing follow-up a_0, the healthcare worker moves to the next state s_1, performs follow-up a_1 for the next patient, and obtains reward r_1; and so on until follow-up of all patients in the patient set of the second training sample set is completed, yielding the cumulative rewards R = (r_0, r_1, ..., r_t, r_{t+1}). During training, the states S, actions A, and rewards R are continuously adjusted so that the cumulative reward expectation output by the DQN model approaches the cumulative reward R; at that point a deterministic selection policy has been learned, the DQN model converges, training is complete, and the second reinforcement learning model is obtained. The deterministic selection policy is determined by the neural network of the DQN model, and its goal is to maximize the cumulative reward expectation, i.e., under this policy the sum of the reward expectations obtained by the DQN agent over all states is maximized.
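For intuition, the effect of the learned deterministic selection policy can be mimicked on a tiny example by exhaustively searching the follow-up orderings and keeping the one with the highest cumulative reward. The patient data, the weight-times-preference-match reward rule, and the numbers below are all invented for illustration; the patent learns this ordering with a DQN rather than enumerating permutations:

```python
from itertools import permutations

# Three patients to be visited: intervention weight and preferred idle slot (assumed).
patients = {
    "patient_a": {"weight": 0.47, "preferred_slot": 0},
    "patient_b": {"weight": 0.30, "preferred_slot": 2},
    "patient_c": {"weight": 0.14, "preferred_slot": 1},
}
idle_slots = [0, 1, 2]  # the healthcare worker's idle periods, in order

def cumulative_reward(order):
    """Assumed reward: intervention weight, boosted when the assigned slot
    matches the patient's follow-up preference period."""
    total = 0.0
    for slot, pid in zip(idle_slots, order):
        p = patients[pid]
        match = 1.0 if slot == p["preferred_slot"] else 0.5
        total += p["weight"] * match
    return total

# Exhaustive stand-in for the policy: pick the ordering with the largest
# cumulative reward over the whole follow-up sequence.
best_order = max(permutations(patients), key=cumulative_reward)
```

With these numbers, every patient can be given their preferred slot, so the best ordering visits patient_a first, then patient_c, then patient_b.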
By analyzing the characteristic data, follow-up preference period, and intervention weight of each patient to be visited, together with the healthcare worker's idle periods, with the second reinforcement learning model, the healthcare worker's follow-up order over the patients to be visited in the current follow-up period can be generated intelligently.
In some embodiments, step S104 is specifically: inputting the characteristic data, follow-up preference period, and intervention weight of each patient to be visited in the set of patients to be visited, together with the healthcare worker's idle periods, into the second reinforcement learning model; through the selection policy of the second reinforcement learning model, having the healthcare worker select one patient to be visited from the set in the initial idle period and complete that follow-up, then advance to the next idle period and select the next patient to be visited, until the healthcare worker has completed follow-up of all patients to be visited in the set, thereby obtaining the selection sequence corresponding to the maximum cumulative reward expectation under the selection policy; and taking this selection sequence as the healthcare worker's follow-up order over the patients to be visited in the current follow-up period.
Specifically, the healthcare worker's idle periods are defined as the states S' = (S_0, S_1, ..., S_t, S_{t+1}), and follow-up of the set of patients to be visited is defined as the actions A' = (A_0, A_1, ..., A_t, A_{t+1}). The characteristic data, follow-up preference period, and intervention weight of each patient to be visited, together with the healthcare worker's states S', are input into the second reinforcement learning model. According to its deterministic selection policy, and combining the characteristic data, follow-up preference period, and intervention weight of each patient to be visited, the model has the healthcare worker, in the initial state S_0, select one patient from all the patients to be visited and complete follow-up A_0, obtaining reward R_0; after completing follow-up A_0, the healthcare worker enters the next state S_1 and completes follow-up A_1 for the next patient to be visited, obtaining reward R_1; and so on until follow-up of all patients to be visited is completed, yielding the cumulative rewards R' = (R_0, R_1, ..., R_t, R_{t+1}). The maximum cumulative reward expectation of R' under the selection policy is obtained, and the selection sequence corresponding to that maximum is taken as the healthcare worker's follow-up order over the patients to be visited in the current follow-up period.
In some embodiments, the method further comprises step S105.
Step S105, determining key state characteristics of each patient to be visited by using an interpretable model of the first reinforcement learning model, so that medical staff pay attention to the key state characteristics of each patient to be visited when performing follow-up on each patient to be visited according to the follow-up sequence.
The interpretable model is used to measure the interpretability of the first reinforcement learning model, that is, to explain the patient's condition during the medical follow-up intervention, so as to highlight the risk points that deserve focused attention during the follow-up.
By way of example, the interpretable model may be a logistic regression model that measures the importance of different state features in the first reinforcement learning model's choice of intervention form. The dataset is built as follows: part of the state features of each chronic disease patient are fixed, the remaining state features are randomly generated, and the data are fed into the first reinforcement learning model to obtain the corresponding intervention form; repeating these steps yields a dataset whose input is the patient's state features and whose output is the intervention form. A logistic regression model is then fitted on this dataset, taking the state features of a chronic disease patient as input and outputting the probability that those state features trigger manual intervention. The logistic regression model offers excellent computational efficiency and interpretability.
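The surrogate-model construction above can be sketched as follows. Everything here is a toy stand-in: `rl_intervention` is a hand-written threshold rule playing the role of the first reinforcement learning model, the state features are random numbers, and the logistic regression is fitted by plain gradient descent rather than any particular library.

```python
# Toy reconstruction of the surrogate-model procedure: label randomly
# generated state features with the RL model's intervention decision, then
# fit a logistic regression whose weights expose feature importance.
# `rl_intervention` is a hand-written stand-in, NOT the patent's model.
import numpy as np

rng = np.random.default_rng(0)

def rl_intervention(x):
    # stand-in decision rule: 1 = manual intervention, 0 = none;
    # feature x[1] is deliberately made twice as influential as x[0]
    return int(x[0] + 2.0 * x[1] > 1.5)

# build the dataset: random state features -> intervention label
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = np.array([rl_intervention(x) for x in X], dtype=float)

# fit logistic regression by plain gradient descent on the log-loss
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(manual intervention)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

# the learned weights recover the stand-in rule's feature importance:
# w[1] > w[0] > 0, i.e. the second feature matters more
```

Because the labels come from a linear rule, the fitted weights align with that rule's coefficients, which is exactly the "importance of different state features" the surrogate model is meant to expose.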
After the construction of the interpretable model is completed, the interpretable model is utilized to determine the key state characteristics of each patient to be visited.
In some embodiments, the determining key status features of each patient to be followed using the interpretable model of the first reinforcement learning model is specifically: respectively inputting each state characteristic in the characteristic data of each patient to be visited into the interpretable model for classification, and obtaining the probability of triggering manual intervention by each state characteristic of each patient to be visited, which is output by the interpretable model; comparing the probability of triggering manual intervention by the state characteristics of each patient to be visited to obtain a corresponding comparison result of each patient to be visited; and respectively selecting key state characteristics of each patient to be visited from the state characteristics of each patient to be visited according to the corresponding comparison result of each patient to be visited.
The state features of each patient to be followed are input into the interpretable model for classification, and the model outputs the probability that each state feature triggers manual intervention; these probabilities are then compared, and the state feature with the highest probability of triggering manual intervention is taken as the patient's key state feature. In this way, the interpretable model makes it possible to explain which state features of each patient to be followed ultimately triggered the manual intervention.
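The key-feature selection step above reduces to an argmax over per-feature trigger probabilities. A minimal sketch, with made-up feature names and probabilities:

```python
# Minimal sketch: pick the state feature with the highest probability of
# triggering manual intervention. Feature names and probabilities are
# invented for illustration; in the text they come from the interpretable model.

def key_state_feature(trigger_probs):
    """trigger_probs: mapping state-feature name -> P(feature triggers
    manual intervention), as output by the interpretable model."""
    return max(trigger_probs, key=trigger_probs.get)

patient_probs = {"blood_pressure": 0.82, "glucose": 0.64, "adherence": 0.31}
key = key_state_feature(patient_probs)   # "blood_pressure"
```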
By introducing interpretability techniques for reinforcement learning, the state features of patients to be followed become explainable. This enriches the talking points available during medical communication in the follow-up process, guides healthcare workers more scientifically toward the issues each patient needs to watch, and improves the efficiency and rigor of communication.
To sum up, as shown in the follow-up scenario example of fig. 2, two reinforcement learning models are introduced: one judges whether a patient needs manual intervention, and the other prioritizes the intervention patients within the healthcare worker's idle periods. Meanwhile, a reinforcement learning interpretability innovation is introduced to explain the patient's condition during the medical follow-up intervention and to highlight the risk points deserving focused attention.
According to the chronic disease patient follow-up method provided by this embodiment, feature data of each chronic disease patient are obtained, the feature data comprising personal features and state features from the previous follow-up period; the feature data are analyzed with a first reinforcement learning model to determine each patient's intervention form in the current follow-up period, and the chronic disease patients whose intervention form is manual intervention are taken as patients to be followed, forming a patient set to be followed; the feature data and follow-up preference period of each patient in the set are obtained for the current follow-up period, the intervention weight of each patient in the current follow-up period is calculated, and the healthcare worker's idle periods in the current follow-up period are obtained; finally, a second reinforcement learning model analyzes the feature data, follow-up preference periods, intervention weights and the healthcare worker's idle periods to generate the follow-up order of each patient to be followed in the current follow-up period. Screening of patients to be followed is achieved with the first reinforcement learning model, and the creatively introduced patient intervention weight distinguishes the urgency of each patient more effectively; the second reinforcement learning model then prioritizes follow-up within the idle periods, so that medical resources are allocated more reasonably, effectively and scientifically, the follow-up work of healthcare workers is made more convenient, the follow-up effect is improved, and follow-up becomes more intelligent.
Referring to fig. 3, fig. 3 is a schematic block diagram of a chronic patient follow-up device according to an embodiment of the present application.
As shown in fig. 3, the apparatus 300 includes: a first acquisition module 301, a determination module 302, a second acquisition module 303, and a generation module 304.
A first obtaining module 301, configured to obtain feature data of each chronic patient, where the feature data includes a personal feature and a status feature in a previous follow-up period;
a determining module 302, configured to analyze the feature data of each chronic disease patient by using a first reinforcement learning model, determine the intervention form of each chronic disease patient in the current follow-up period, and take the chronic disease patients whose intervention form is manual intervention as patients to be followed, forming a patient set to be followed;
a second obtaining module 303, configured to obtain feature data of each patient to be visited and a follow-up preference period in the current follow-up period in the set of patients to be visited, calculate an intervention weight of each patient to be visited in the current follow-up period, and obtain a plurality of idle periods of medical staff in the current follow-up period;
the generating module 304 is configured to analyze the feature data, the follow-up preference period, the intervention weight, and the multiple idle periods of the medical staff of each patient to be visited in the set of patients to be visited by using a second reinforcement learning model, and generate a follow-up sequence of the medical staff to each patient to be visited in the current follow-up period.
It should be noted that, for convenience and brevity of description, specific working procedures of the above-described apparatus and each module and unit may refer to corresponding procedures in the foregoing embodiment of the chronic patient follow-up method, and will not be described in detail herein.
The apparatus provided by the above embodiments may be implemented in the form of a computer program which may be run on a computer device as shown in fig. 4.
Referring to fig. 4, fig. 4 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device may be a personal computer (personal computer, PC), a server, or the like having a data processing function.
As shown in fig. 4, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions which, when executed, cause the processor to perform any of the chronic patient follow-up methods described herein.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for running the computer program stored in the non-volatile storage medium; when executed by the processor, the computer program causes the processor to perform any of the chronic patient follow-up methods described herein.
The network interface is used for network communication, such as transmitting assigned tasks. Those skilled in the art will appreciate that the structure shown in FIG. 4 is merely a block diagram and does not constitute a limitation on the computer device to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange components differently.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Wherein in one embodiment the processor is configured to run a computer program stored in the memory to implement the steps of:
acquiring characteristic data of each chronic patient, wherein the characteristic data comprises personal characteristics and state characteristics in a previous follow-up period; analyzing the characteristic data of each chronic patient by using a first reinforcement learning model, determining an intervention form of each chronic patient in a current follow-up period, and taking a plurality of chronic patients with the intervention form as manual intervention as patients to be followed up to form a patient set to be followed up; acquiring characteristic data of each patient to be visited and a follow-up preference period in a current follow-up period in the set of patients to be visited, calculating intervention weight of each patient to be visited in the current follow-up period, and acquiring a plurality of idle periods of medical staff in the current follow-up period; and analyzing the characteristic data, the follow-up preference time period, the intervention weight and the idle time periods of the medical staff of each patient to be visited in the set of patients to be visited by using a second reinforcement learning model, and generating the follow-up sequence of the medical staff to each patient to be visited in the current follow-up period.
In some embodiments, the processor is configured to perform the analyzing the characteristic data of each chronic patient using a first reinforcement learning model, and when determining a form of intervention of each chronic patient during a current follow-up period, to perform:
respectively inputting the characteristic data of each chronic patient into the first reinforcement learning model for prediction, and obtaining a first rewarding expectation corresponding to manual intervention and a second rewarding expectation corresponding to no manual intervention of each chronic patient output by the first reinforcement learning model in a current follow-up period;
and determining the intervention form of each chronic patient in the current follow-up period according to the first rewarding expectation and the second rewarding expectation corresponding to each chronic patient.
In some embodiments, the processor implements the determining, based on the first rewards expectation and the second rewards expectation corresponding to each chronic patient, a form of intervention of each chronic patient in a current follow-up period for implementing:
comparing the first rewarding expectation and the second rewarding expectation corresponding to each chronic patient;
determining a form of intervention for the chronic patient for which the first reward is expected to be greater than the second reward in the current follow-up period as a manual intervention.
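The two steps above amount to a simple comparison rule per patient: manual intervention whenever the first reward expectation Q exceeds the second reward expectation Q'. A minimal sketch, with hypothetical patient IDs and values:

```python
# Sketch of the intervention-form decision: a patient's intervention form is
# manual intervention exactly when the first reward expectation Q (manual
# intervention) exceeds the second reward expectation Q' (no intervention).
# Patient IDs and expectation values below are hypothetical.

def patients_to_follow(reward_expectations):
    """reward_expectations: dict patient_id -> (Q_manual, Q_no_manual).
    Returns the patient set to be followed up."""
    return [pid for pid, (q, q_prime) in reward_expectations.items()
            if q > q_prime]

expectations = {"p1": (0.8, 0.3), "p2": (0.2, 0.6), "p3": (0.9, 0.4)}
to_follow = patients_to_follow(expectations)   # ["p1", "p3"]
```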
In some embodiments, the processor, when implementing the calculation of the intervention weights for each patient to be followed in the current follow-up period, is configured to implement:
according to the first rewarding expectation and the second rewarding expectation corresponding to each patient to be followed, combining a preset intervention weight calculation formulaAnd calculating to obtain the intervention weight of each patient to be visited in the current follow-up period, wherein W represents the intervention weight, Q represents a first rewarding expectation corresponding to manual intervention of the patient to be visited, and Q' represents a second rewarding expectation corresponding to no manual intervention of the patient to be visited.
In some embodiments, the processor is configured to analyze the characteristic data, the follow-up preference period, the intervention weight, and the plurality of idle periods of the healthcare worker for each patient to be visited in the set of patients to be visited using a second reinforcement learning model, and generate, when the healthcare worker performs a follow-up sequence for each patient to be visited in a current follow-up period, a follow-up sequence for each patient to be visited in the current follow-up period, to perform:
inputting the feature data, follow-up preference period and intervention weight of each patient to be followed in the patient set, together with the plurality of idle periods of the healthcare worker, into the second reinforcement learning model; through the selection strategy of the second reinforcement learning model, the healthcare worker selects one patient to be followed from the set and completes the follow-up visit in the initial idle period, then moves to the next idle period and selects the next patient to follow up, and so on until follow-up of all patients in the set is complete, yielding the selection sequence corresponding to the maximum cumulative reward expectation under the selection strategy;
And taking the selection sequence as a follow-up sequence of the medical staff on each patient to be followed in the current follow-up period.
In some embodiments, the processor is configured to run a computer program stored in the memory, and further implement the steps of:
and determining key state characteristics of each patient to be visited by using an interpretable model of the first reinforcement learning model so that the medical staff pay attention to the key state characteristics of each patient to be visited when carrying out follow-up on each patient to be visited according to the follow-up sequence.
In some embodiments, the processor is configured to, when implementing the determining key status features of each patient to be followed using the interpretable model of the first reinforcement learning model:
respectively inputting each state characteristic in the characteristic data of each patient to be visited into the interpretable model for classification, and obtaining the probability of triggering manual intervention by each state characteristic of each patient to be visited, which is output by the interpretable model;
comparing the probability of triggering manual intervention by the state characteristics of each patient to be visited to obtain a corresponding comparison result of each patient to be visited;
and respectively selecting key state characteristics of each patient to be visited from the state characteristics of each patient to be visited according to the corresponding comparison result of each patient to be visited.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored thereon, the computer program comprising program instructions that, when executed, implement a method that can be referred to various embodiments of the chronic patient follow-up method of the present application.
The computer readable storage medium may be an internal storage unit of the computer device according to the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, which are provided on the computer device.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
Blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association by cryptographic methods, each block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and so on.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments. While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of follow-up for a chronically ill patient, the method comprising the steps of:
Acquiring characteristic data of each chronic patient, wherein the characteristic data comprises personal characteristics and state characteristics in a previous follow-up period;
analyzing the characteristic data of each chronic patient by using a first reinforcement learning model, and determining an intervention form of each chronic patient in a current follow-up period, wherein the intervention form comprises manual intervention and no manual intervention, and taking a plurality of chronic patients whose intervention form is manual intervention as patients to be followed up to form a patient set to be followed up;
acquiring characteristic data of each patient to be visited and a follow-up preference period in a current follow-up period in the set of patients to be visited, calculating intervention weight of each patient to be visited in the current follow-up period, and acquiring a plurality of idle periods of medical staff in the current follow-up period;
and analyzing the characteristic data, the follow-up preference time period, the intervention weight and the idle time periods of the medical staff of each patient to be visited in the set of patients to be visited by using a second reinforcement learning model, and generating the follow-up sequence of the medical staff to each patient to be visited in the current follow-up period.
2. The method of claim 1, wherein analyzing the characteristic data of each chronic patient using a first reinforcement learning model to determine the form of intervention of each chronic patient during the current follow-up period comprises:
Respectively inputting the characteristic data of each chronic patient into the first reinforcement learning model for prediction, and obtaining a first rewarding expectation corresponding to manual intervention and a second rewarding expectation corresponding to no manual intervention of each chronic patient output by the first reinforcement learning model in a current follow-up period;
and determining the intervention form of each chronic patient in the current follow-up period according to the first rewarding expectation and the second rewarding expectation corresponding to each chronic patient.
3. The method of claim 2, wherein determining the form of intervention of each chronically ill patient during the current follow-up period based on the corresponding first rewards expectation and the second rewards expectation for each chronically ill patient comprises:
comparing the first rewarding expectation and the second rewarding expectation corresponding to each chronic patient;
determining a form of intervention for the chronic patient for which the first reward is expected to be greater than the second reward in the current follow-up period as a manual intervention.
4. The chronic patient follow-up method according to claim 2, wherein the calculating of the intervention weights of each patient to be followed over the current follow-up period comprises:
According to the first reward expectation and the second reward expectation corresponding to each patient to be followed, in combination with a preset intervention weight calculation formula, calculating the intervention weight of each patient to be followed in the current follow-up period, wherein W represents the intervention weight, Q represents the first reward expectation corresponding to manual intervention of the patient to be followed, and Q' represents the second reward expectation corresponding to no manual intervention of the patient to be followed.
5. The chronic disease patient follow-up method according to claim 1, wherein the analyzing the characteristic data, the follow-up preference period and the intervention weight of each patient to be followed and the plurality of idle periods of the medical staff in the set of patients to be followed using a second reinforcement learning model generates a follow-up sequence of the medical staff for each patient to be followed in a current follow-up period, comprising:
inputting the characteristic data, the follow-up preference time period, the intervention weight and the plurality of idle time periods of the medical staff of each patient to be followed in the patient set into the second reinforcement learning model, so that, through the selection strategy of the second reinforcement learning model, the medical staff selects one patient to be followed from the patient set and completes the follow-up visit in the initial idle time period, then updates to the next idle time period and selects the next patient to be followed, until the medical staff completes follow-up of all patients to be followed in the patient set, and obtaining the selection sequence corresponding to the maximum cumulative reward expectation under the selection strategy;
And taking the selection sequence as a follow-up sequence of the medical staff on each patient to be followed in the current follow-up period.
6. The method of claim 1, further comprising the steps of:
and determining key state characteristics of each patient to be visited by using an interpretable model of the first reinforcement learning model so that the medical staff pay attention to the key state characteristics of each patient to be visited when carrying out follow-up on each patient to be visited according to the follow-up sequence.
7. The method of claim 6, wherein determining key status features for each patient to be followed using the interpretable model of the first reinforcement learning model comprises:
respectively inputting each state characteristic in the characteristic data of each patient to be visited into the interpretable model for classification, and obtaining the probability of triggering manual intervention by each state characteristic of each patient to be visited, which is output by the interpretable model;
comparing the probability of triggering manual intervention by the state characteristics of each patient to be visited to obtain a corresponding comparison result of each patient to be visited;
and respectively selecting key state characteristics of each patient to be visited from the state characteristics of each patient to be visited according to the corresponding comparison result of each patient to be visited.
8. A chronic patient follow-up device, the chronic patient follow-up device comprising:
the first acquisition module is used for acquiring characteristic data of each chronic patient, wherein the characteristic data comprises personal characteristics and state characteristics in the previous follow-up period;
the determining module is used for analyzing the characteristic data of each chronic patient by utilizing a first reinforcement learning model, determining an intervention form of each chronic patient in the current follow-up period, wherein the intervention form comprises manual intervention and no manual intervention, and taking a plurality of chronic patients whose intervention form is manual intervention as patients to be followed, forming a patient set to be followed;
the second acquisition module is used for acquiring the characteristic data of each patient to be visited and the follow-up preference time period in the current follow-up period in the set of patients to be visited, calculating the intervention weight of each patient to be visited in the current follow-up period, and acquiring a plurality of idle time periods of medical staff in the current follow-up period;
the generation module is used for analyzing the characteristic data, the follow-up preference time period, the intervention weight and the plurality of idle time periods of the medical staff of each patient to be visited in the set of patients to be visited by using a second reinforcement learning model, and generating the follow-up sequence of the medical staff to each patient to be visited in the current follow-up period.
9. A computer device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program when executed by the processor implements the steps of the chronic patient follow-up method according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the chronic patient follow-up method according to any of claims 1 to 7.
CN202111017310.0A 2021-08-31 2021-08-31 Chronic patient follow-up method, device, computer equipment and readable storage medium Active CN113724824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111017310.0A CN113724824B (en) 2021-08-31 2021-08-31 Chronic patient follow-up method, device, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111017310.0A CN113724824B (en) 2021-08-31 2021-08-31 Chronic patient follow-up method, device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113724824A CN113724824A (en) 2021-11-30
CN113724824B true CN113724824B (en) 2024-03-08

Family

ID=78680240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111017310.0A Active CN113724824B (en) 2021-08-31 2021-08-31 Chronic patient follow-up method, device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113724824B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116779150B (en) * 2023-07-03 2023-12-22 浙江一山智慧医疗研究有限公司 Personalized medical decision method, device and application based on multi-agent interaction

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648808A (en) * 2018-03-23 2018-10-12 深圳百诺国际生命科技有限公司 Follow-up plan example generation method and device
CN109493958A (en) * 2018-10-23 2019-03-19 平安医疗健康管理股份有限公司 A kind of follow-up ways to draw up the plan, device, server and medium
CN110400613A (en) * 2019-06-10 2019-11-01 南京医基云医疗数据研究院有限公司 A kind of follow-up patient screening method, apparatus, readable medium and electronic equipment
KR20200123574A (en) * 2019-04-22 2020-10-30 서울대학교병원 Apparatus and method for symtome and disease management based on learning
CN112489747A (en) * 2020-12-14 2021-03-12 平安国际智慧城市科技股份有限公司 Chronic patient supervision method, device, equipment and medium based on analysis model
CN112906973A (en) * 2021-03-10 2021-06-04 浙江银江云计算技术有限公司 Family doctor follow-up visit path recommendation method and system
WO2021139223A1 (en) * 2020-08-06 2021-07-15 平安科技(深圳)有限公司 Method and apparatus for interpretation of clustering model, computer device, and storage medium
CN113284632A (en) * 2021-05-28 2021-08-20 平安国际智慧城市科技股份有限公司 Follow-up method and device for diabetic patients, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11620595B2 (en) * 2020-01-15 2023-04-04 Microsoft Technology Licensing, Llc Deep reinforcement learning for long term rewards in an online connection network

Also Published As

Publication number Publication date
CN113724824A (en) 2021-11-30

Similar Documents

Publication Publication Date Title
US8744870B2 (en) Method and system for forecasting clinical pathways and resource requirements
CN109863721B (en) Digital assistant extended automatic ranking and selection
CN108351862B (en) Method and apparatus for determining developmental progress using artificial intelligence and user input
JP6783887B2 (en) Treatment route analysis and management platform
US20190267141A1 Patient readmission prediction tool
US20190267133A1 (en) Privacy-preserving method and system for medical appointment scheduling using embeddings and multi-modal data
WO2021179630A1 (en) Complications risk prediction system, method, apparatus, and device, and medium
EP3701415A1 (en) System for supporting clinical decision-making in reproductive endocrinology and infertility
CN111144658B (en) Medical risk prediction method, device, system, storage medium and electronic equipment
CN112289442A (en) Method and device for predicting disease endpoint event and electronic equipment
CN111696661A (en) Patient clustering model construction method, patient clustering method and related equipment
US20200111575A1 (en) Producing a multidimensional space data structure to perform survival analysis
CN113724824B (en) Chronic patient follow-up method, device, computer equipment and readable storage medium
He et al. Neural network-based multi-task learning for inpatient flow classification and length of stay prediction
CA2997354A1 (en) Experience engine-method and apparatus of learning from similar patients
EP3718116B1 (en) Apparatus for patient data availability analysis
CN114822741A (en) Processing device, computer equipment and storage medium of patient classification model
US20210342691A1 (en) System and method for neural time series preprocessing
CN111967581B (en) Method, device, computer equipment and storage medium for interpreting grouping model
CN113693561A (en) Parkinson disease prediction device and device based on neural network and storage medium
CN113782163A (en) Information pushing method and device and computer readable storage medium
US20190108313A1 (en) Analytics at the point of care
US20200402658A1 (en) User-aware explanation selection for machine learning systems
CN113066531B (en) Risk prediction method, risk prediction device, computer equipment and storage medium
CN114664458A (en) Patient classification device, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant