US20230352164A1 - Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same


Info

Publication number
US20230352164A1
Authority
US
United States
Prior art keywords
data
vital signs
machine learning
subject
model
Legal status
Pending
Application number
US18/218,016
Inventor
Yeongnam LEE
Yeha LEE
Joonmyoung KWON
Current Assignee
Hyewon Medical Foundation
Vuno Inc
Original Assignee
Hyewon Medical Foundation
Vuno Inc
Priority claimed from KR1020170102265A external-priority patent/KR101841222B1/en
Priority claimed from KR1020170106529A external-priority patent/KR101843066B1/en
Application filed by Hyewon Medical Foundation, Vuno Inc filed Critical Hyewon Medical Foundation
Priority to US18/218,016 priority Critical patent/US20230352164A1/en
Assigned to HYEWON MEDICAL FOUNDATION, VUNO, INC. reassignment HYEWON MEDICAL FOUNDATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEE, Yeha, LEE, Yeongnam, KWON, Joonmyoung
Publication of US20230352164A1 publication Critical patent/US20230352164A1/en

Classifications

    • G16H 40/63: ICT specially adapted for the management or operation of medical equipment or devices, for local operation
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • G06F 18/217: Validation; Performance evaluation; Active pattern learning techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413: Classification techniques based on distances to training or reference patterns
    • G06N 20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N 20/20: Ensemble learning
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G06N 3/088: Non-supervised learning, e.g. competitive learning
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/30: ICT specially adapted for calculating health indices; for individual health risk assessment
    • G06F 2218/12: Classification; Matching (aspects of pattern recognition specially adapted for signal processing)
    • G06N 3/048: Activation functions

Definitions

  • Example embodiments relate to a method of generating a prediction result for early predicting an occurrence of fatal symptoms of a subject, a method of classifying data using data augmentation for machine learning, and a computing apparatus using the methods.
  • the survival discharge rate in cases of cardiac arrest is only about 20 to 30%. Since cardiac arrest is preceded by a change in vital signs, early prediction thereof is possible.
  • a prediction method according to the related art mainly depends on the experience or knowledge of a medical specialist, for example, a nurse or a doctor in charge. Therefore, the risk of the subject or the patient may be evaluated quite differently depending on an individual's capability. Also, there is a shortage of emergency medical staff available to perform such evaluations.
  • the risk of fatal symptoms is determined by assigning scores to vital sign values according to conventional rules.
  • a false negative refers to a case in which the risk is not predicted for a patient in whom fatal symptoms actually occur
  • a false positive refers to a case in which an occurrence of fatal symptoms is predicted even though the fatal symptoms do not actually occur.
  • a case of predicting cardiac arrest of a patient corresponds to a 2-class classification in which a series of vital signs acquired from the patient are classified into vital signs (first class) corresponding to cardiac arrest or classified into vital signs (second class) of a normal patient not having cardiac arrest, through a machine learning using the series of vital signs as learning data.
  • vital sign data corresponding to cardiac arrest are few. That is, a severe class imbalance of learning data occurs.
  • a machine learning algorithm performs learning with a set of data biased toward one class and thus, the resulting classification model may have decreased overall accuracy and, in particular, decreased accuracy in classifying data corresponding to the other class.
  • in a classification, it is important to correctly classify the class to which a minority of the data belongs (a minority class, the first class in the above example) as well as the class to which a majority of the data belongs (a majority class, the second class in the above example). Therefore, the aforementioned issue needs to be overcome.
  • accordingly, proposed herein is a fatal symptoms early prediction result generating method that may predict fatal symptoms earlier and more accurately than a conventional method.
  • also proposed is a method of improving a conventional GAN so as to increase the accuracy of a classification model trained through machine learning by generating data similar to true data, that is, by performing effective data augmentation to overcome a severe class imbalance of data.
  • Example embodiments are to detect a patient with imminent fatal symptoms, such as cardiac arrest, and to increase a survival rate of the patient by reducing existing high false negatives.
  • example embodiments are to increase the accuracy of a classification model trained through existing machine learning by using data augmentation to generate minority-class learning data similar to true data, even when the class imbalance of the learning data in machine learning is high.
  • example embodiments are to improve medical treatment conditions by saving unnecessary consultation hours through a decrease in existing high false positives.
  • a method of generating a prediction result for early predicting an occurrence of fatal symptoms of a subject including: (a) acquiring, by a computing apparatus, or supporting another apparatus interacting with the computing apparatus to acquire vital signs of the subject; (b) converting, by the computing apparatus, or supporting the other apparatus to convert the acquired vital signs to individuation data that is data individuated for the subject; (c) generating, by the computing apparatus, or supporting the other apparatus to generate analysis information about an early prediction of the fatal symptoms from the individuation data based on a machine learning model for the early prediction of the fatal symptoms, and generating or supporting the other apparatus to generate the prediction result as a result of predicting the occurrence of the fatal symptoms during a duration from a point in time t corresponding to a single specific vital sign among the vital signs to a point in time t+n that is a point in time after a desired time interval n by referring to the generated analysis information; and (d) providing, by the computing apparatus, or supporting the other apparatus to provide the generated prediction result to an external entity.
  • the method may further include (e) updating, by the computing apparatus, or supporting the other apparatus to update the machine learning model based on evaluation information about the prediction result.
  • a computing apparatus for generating a prediction result for early predicting an occurrence of fatal symptoms of a subject, the computing apparatus including: a communicator configured to acquire vital signs of the subject; and a processor configured to convert or support another apparatus interacting through the communicator to convert the acquired vital signs to individuation data that is data individuated for the subject.
  • the processor is configured to generate or support the other apparatus to generate analysis information about an early prediction of the fatal symptoms from the individuation data based on a machine learning model for the early prediction of the fatal symptoms, generate or support the other apparatus to generate the prediction result as a result of predicting the occurrence of the fatal symptoms during a duration from a point in time t corresponding to a single specific vital sign among the vital signs to a point in time t+n that is a point in time after a desired time interval n by referring to the generated analysis information, and provide the generated prediction result to an external entity.
  • the processor may be configured to update or support the other apparatus to update the machine learning model based on evaluation information about the prediction result.
  • a method of classifying data using data augmentation for a machine learning including: (a) acquiring, by a computing apparatus, or supporting another apparatus interacting with the computing apparatus to acquire true data; (b) training, by the computing apparatus, or allowing the other apparatus to train a generator and a discriminator of a modified generative adversarial network (GAN) based on information of a label corresponding to the acquired true data and the true data, wherein, in the modified GAN, the generator includes a sub-generator configured to generate similar data corresponding to each of a plurality of labels, the sub-generator is configured to generate similar data belonging to a label corresponding to the sub-generator, and the discriminator is configured to predict a specific label that is a label corresponding to data to be discriminated by the discriminator among the plurality of labels; (c) training, by the computing apparatus, or allowing the other apparatus to train a machine learning model by generating the similar data using the trained modified GAN and by using (i) the true data and the similar data or (ii) the similar data as learning data of a predetermined machine learning model for classification; and (d) in response to acquiring data to be classified, generating, by the computing apparatus, or supporting the other apparatus to generate classification information of the data to be classified by classifying the data to be classified based on the trained machine learning model.
  • a computing apparatus for classifying data using data augmentation for a machine learning
  • the computing apparatus including: a communicator configured to acquire true data; and a processor configured to train or allow another apparatus interacting through the communicator to train a generator and a discriminator of a modified generative adversarial network (GAN) based on information of a label corresponding to the acquired true data and the true data
  • the generator includes a sub-generator configured to generate similar data corresponding to each of a plurality of labels
  • the sub-generator is configured to generate similar data belonging to a label corresponding to the sub-generator
  • the discriminator is configured to predict a specific label that is a label corresponding to data to be discriminated by the discriminator among the plurality of labels.
  • the processor is configured to train or allow the other apparatus to train a machine learning model by generating the similar data using the trained modified GAN and by using (i) the true data and the similar data or (ii) the similar data as learning data of a predetermined machine learning model for classification, and, in response to acquiring data to be classified, generate or support the other apparatus to generate classification information of the data to be classified by classifying the data to be classified based on the trained machine learning model.
  • a computer program stored in media including instructions that cause a computing apparatus to perform the method.
  • FIG. 1 illustrates an example of describing a recurrent neural network (RNN) that is a machine learning model according to an example embodiment.
  • FIG. 2 is a diagram illustrating an example of a computing apparatus configured to perform a method of generating a prediction result for early predicting an occurrence of fatal symptoms of a subject (hereinafter, also referred to as a “fatal symptoms early prediction result generating method”) and a method of classifying data using a data augmentation (hereinafter, also referred to as a “data classification method”) according to an example embodiment.
  • FIG. 3 is a diagram illustrating an example of hardware and software architectures of a computing apparatus configured to perform a fatal symptoms early prediction result generating method according to an example embodiment.
  • FIG. 4 is a flowchart illustrating an example of a fatal symptoms early prediction result generating method according to an example embodiment.
  • FIG. 5 illustrates an example of a method of performing a data augmentation using a modified generative adversarial network (GAN) according to an example embodiment.
  • the term “vital signs,” when used in the detailed description and claims, should not be interpreted as limited to the general meaning of measurement values, such as a body temperature, an electrocardiogram (ECG), a respiration, a pulse rate, a blood pressure, an oxygen saturation, a skin conductivity, and the like, of a subject, and should be understood to also include electroencephalography (EEG) signals, the amount or concentration of a specific substance in biological samples acquirable through other measurements, and the like.
  • biological samples should be understood as various kinds of substances that may be collected from the subject, for example, blood, serum, urine, lymph, cerebrospinal fluid, saliva, semen, vaginal fluid, etc., of the subject.
  • the term “training/learning,” when used in the detailed description and claims, refers to performing machine learning through computing according to a procedure, and those skilled in the art will understand that the term is not intended to refer to mental acts, such as human educational activities.
  • the disclosure may include any possible combinations of example embodiments described herein. It should be understood that, although various example embodiments differ from each other, they do not need to be exclusive. For example, a specific shape, structure, and feature described herein may be implemented as another example embodiment without departing from the spirit and scope of the disclosure. Also, it should be understood that a position or an arrangement of an individual component of each disclosed example embodiment may be modified without departing from the spirit and scope of the disclosure. Accordingly, the following detailed description is not to be construed as being limiting and the scope of the disclosure, if properly described, is limited by the claims, their equivalents, and all variations within the scope of the claims. In the drawings, like reference numerals refer to like elements throughout.
  • FIG. 1 illustrates an example of describing a recurrent neural network (RNN) that is a machine learning model according to an example embodiment.
  • a deep neural network model may be briefly described as a form in which artificial neural networks are stacked in multiple layers. That is, a deep structured neural network may be referred to as a deep neural network (DNN), a network with a deep structure, and, referring to FIG. 1 , may be trained by automatically learning features of vital signs and the relationships between the vital signs from a large amount of data in a multilayered network structure and, through this, reducing the error of an objective function, that is, improving the accuracy of predicting fatal symptoms. The DNN, also described as modeling the connections between nerve cells of the human brain, is becoming a next-generation model of artificial intelligence (AI).
  • a recurrent neural network may be used to analyze sequentially input data as shown in FIG. 1 .
  • the RNN model has a structure that detects features of data according to a time sequence and selectively applies, when analyzing the current point in time, the main features to be referred to from among the features of previous points in time. For example, referring to FIG. 1 , when analyzing data input at a point in time t+1, the data may be analyzed using the main features analyzed at points in time t-1 and t. As described above, according to example embodiments, it is possible to extract a change in vital signs over time using the structure of the RNN and to use the extracted change for prediction of fatal symptoms.
  • the RNN unfolded according to time-series order, time flow, or time axis may be understood as a DNN having an infinite number of layers.
  • x_t denotes an input vector at the point in time t.
  • s_t denotes a hidden state (i.e., the memory of the neural network) at the point in time t.
  • y denotes an output vector of the neural network at the point in time t (indicated with o in FIG. 1 ).
  • f denotes an activation function (e.g., a tanh or ReLU function), and U, V, and W denote parameters of the neural network.
  • U, V, and W are parameters shared equally across all points in time in the RNN, which differs from a feedforward neural network.
  • g denotes an activation function (typically, a softmax function) for an output layer.
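  • For reference only, these symbols may be related by the standard RNN recurrence (consistent with the terms Ux_t and Ws_{t-1} referenced later in this description); the exact formulation used in an embodiment is a design choice:

    s_t = f(U x_t + W s_{t-1}),    y_t = g(V s_t)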
  • FIG. 2 is a diagram illustrating an example of a computing apparatus configured to perform a fatal symptoms early prediction result generating method according to an example embodiment.
  • a computing apparatus 200 includes a communicator 210 and a processor 220 , and may directly or indirectly communicate with an external computing apparatus (not shown) through the communicator 210 .
  • the computing apparatus 200 may achieve a desired system performance using a combination of typical computer hardware (e.g., an apparatus including a computer processor, a memory, a storage, an input device and an output device, components of other existing computing apparatuses, etc.; an electronic communication apparatus such as a router, a switch, etc.; an electronic information storage system such as a network attachment storage (NAS) and a storage area network (SAN)) and computer software (i.e., instructions that enable a computing apparatus to function in a specific manner).
  • the communicator 210 of the computing apparatus 200 may transmit and receive a request and a response with another interacting computing apparatus.
  • the request and the response may be implemented using the same transmission control protocol (TCP) session.
  • the request and the response may be transmitted and received as a user datagram protocol (UDP) datagram.
  • the communicator 210 may include a keyboard, a mouse, and other external input devices to receive a command or an instruction, etc.
  • the processor 220 of the computing apparatus 200 may include a hardware configuration, such as a data bus, a micro processing unit (MPU) or a central processing unit (CPU), a cache memory, and the like. Also, the processor 220 may further include a software configuration of an application that performs a specific object, an operating system (OS), and the like.
  • FIG. 3 is a diagram illustrating an example of hardware and software architectures of a computing apparatus configured to perform a fatal symptoms early prediction result generating method according to an example embodiment.
  • the computing apparatus 200 may include a vital sign acquisition module 310 as a component.
  • the vital sign acquisition module 310 may be implemented through an interaction with the communicator 210 included in the computing apparatus 200 , or an interaction between the communicator 210 and the processor 220 .
  • the vital sign acquisition module 310 may acquire vital signs of a subject.
  • vital signs may be acquired from an electronic medical record (EMR) of the subject.
  • the acquired vital signs may be forwarded to an individuation module 320 .
  • the individuation module 320 converts the vital signs to individuation data that is data individuated or personalized for the subject.
  • the vital signs may be acquired from the subject that is a human (Homo sapiens).
  • those skilled in the art may understand that they are not limited thereto. That is, “individuation” may be performed with respect to a specific animal subject in correspondence to performing “personalization” with respect to a specific human subject.
  • Such conversion is performed because the normal vital signs and the vital signs just before an occurrence of fatal symptoms differ from subject to subject. For example, one subject may breathe 45 times per minute when normal, while another subject may breathe 45 times per minute only just before cardiac arrest. As described above, the acquired vital signs need to be adjusted to fit the individual subject instead of being used as they are.
  • the individuation module 320 may convert the vital signs to the individuation data by calculating a deviation of vital signs of an entire time duration by subtracting an average of vital signs of an initial desired time duration among the vital signs from the vital signs of the entire time duration, and by calculating a standard score (z-score) of the vital signs of the entire time duration of the subject as the individuation data by referring to an average and a variance of vital signs of a plurality of other subjects.
  • for example, if the initial desired time duration is 10 hours, a difference from the average, that is, a deviation, may be calculated by subtracting the average of the vital signs of the first 10 hours from the vital signs of the remaining hours.
  • a z-score for the entire vital signs of the subject may then be acquired by referring to an average and a variance of such vital signs calculated with respect to all of the subjects. In this manner, a subsequent vital sign may be considered as a value relative to the normal vital signs of each subject rather than as an absolute value.
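  • The following is a minimal illustrative sketch of this individuation step (the 10-hour baseline window, the sampling rate, and all variable names are assumptions made for the sketch, not limitations of the disclosure); the population statistics are assumed to have been precomputed from the deviations of other subjects:

```python
import numpy as np

def individuate(vitals, population_mean, population_var,
                baseline_hours=10, samples_per_hour=60):
    """Convert one subject's time-ordered vital-sign values into
    individuation data (a per-subject standard score), as described above."""
    vitals = np.asarray(vitals, dtype=float)
    baseline = vitals[: baseline_hours * samples_per_hour]
    # Deviation: subtract the subject's own baseline average from every value.
    deviation = vitals - baseline.mean()
    # Standard score (z-score) relative to statistics of other subjects.
    return (deviation - population_mean) / np.sqrt(population_var)
```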
  • analysis information about the early prediction may be generated.
  • a prediction module 340 may generate a prediction result about an occurrence probability of fatal symptoms by a specific point in time based on the analysis information. A process of generating the analysis information and the prediction result is further described below.
  • An updating and learning module 350 may function to pretrain the machine learning model to be used for predicting an occurrence of fatal symptoms of the subject by performing the method according to the example embodiments, or to update the machine learning model based on evaluation information about the prediction result obtained by performing the method.
  • FIG. 4 is a flowchart illustrating an example of a fatal symptoms early prediction result generating method according to an example embodiment.
  • a prediction result generating method includes operation S 410 of acquiring, by the vital sign acquisition module 310 implemented by the communicator 210 of the computing apparatus 200 , vital signs of the subject.
  • the vital signs may be time series vital signs from a point in time t0 to a point in time t.
  • the vital signs may be vital signs at a single point in time. Those skilled in the art may understand from this specification that a prediction result about fatal symptoms may be generated even with respect to vital signs at a single point in time.
  • the prediction result generating method further includes operation S 420 of converting, by the individuation module 320 implemented by the processor 220 of the computing apparatus 200 , or supporting another apparatus interacting through the communicator 210 to convert the acquired vital signs to individuation data that is data individuated for the subject.
  • the individuation module 320 may calculate a deviation of vital signs of an entire time duration by subtracting an average of vital signs of an initial desired time duration among the vital signs from the vital signs of the entire time duration, and may calculate a standard score (z-score) of the vital signs of the entire time duration of the subject as the individuation data by referring to an average and a variance of vital signs of a plurality of other subjects.
  • a method of generating individuation data by converting to fit a characteristic of a patient for each subject is not limited to the aforementioned method.
  • the fatal symptoms early prediction result generating method further includes operation S 432 of generating or supporting the other apparatus to generate analysis information about an early prediction of the fatal symptoms from the individuation data based on a machine learning model for the early prediction of the fatal symptoms and operation S 434 of generating or supporting the other apparatus to generate the prediction result as a result of predicting the occurrence of the fatal symptoms during a duration from a point in time t corresponding to a single specific vital sign among the vital signs to a point in time t+n that is a point in time after a desired time interval n by referring to the generated analysis information.
  • a recurrent neural network (RNN) model may be included as an analysis model in the machine learning model.
  • the analysis modules 330 a and 330 b that implement the analysis model may be executed by the processor 220 .
  • x_t denotes the individuation data that is an input vector at the point in time t, or a value processed from the individuation data.
  • the value processed from the individuation data may refer to, for example, a variance (from a previous point in time to a current point in time) of the individuation data or an amount of change in the variance.
  • s_t denotes a hidden state corresponding to a memory of the RNN model at the point in time t, and s_{t-1} denotes the hidden state at the point in time t-1.
  • U, V, and W denote neural network parameters shared equally across all points in time of the RNN model.
  • f denotes a predetermined first activation function that is selected to calculate the hidden state.
  • y denotes an output layer, that is, a latent feature according to the RNN model at the point in time t, provided as the analysis information.
  • g denotes a predetermined second activation function that is selected to calculate the output layer.
  • the first analysis module 330 a of the analysis modules 330 a and 330 b functions to apply a relationship among the individuation data by referring to the individuation data at the point in time t, and may correspond to, for example, Ux_t in the RNN model.
  • the second analysis module 330 b of the analysis modules 330 a and 330 b functions to apply a change in the individuation data over time by referring to the individuation data up to the point in time t-1, and may correspond to, for example, Ws_{t-1} in the RNN model.
  • the first activation function f may be a generally used tanh or ReLU function.
  • the second activation function g may be a generally used softmax function. The first activation function and the second activation function may be selected based on the purpose and the computational complexity of each.
  • the machine learning model may further include a second neural network model including at least one fully connected layer for calculating an occurrence probability of the fatal symptoms from the output layer (y) as at least a portion of a prediction model.
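  • As an illustration only, a machine learning model of this kind (an RNN-based analysis model followed by a fully connected prediction model) might be sketched as follows in PyTorch; the hidden size, layer widths, input dimension, and class name are assumptions of this sketch and not part of the disclosure:

```python
import torch
import torch.nn as nn

class FatalSymptomPredictor(nn.Module):
    """Sketch: RNN analysis model (latent feature y_t) followed by a fully
    connected prediction model (occurrence probability of fatal symptoms)."""

    def __init__(self, n_vitals=7, hidden_size=64):
        super().__init__()
        # Analysis model: s_t = tanh(U x_t + W s_{t-1}); the parameters U and W
        # are shared across all points in time inside nn.RNN.
        self.rnn = nn.RNN(input_size=n_vitals, hidden_size=hidden_size,
                          batch_first=True, nonlinearity='tanh')
        # Prediction model: fully connected layers mapping the latent feature
        # to an occurrence probability.
        self.head = nn.Sequential(nn.Linear(hidden_size, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        # x: (batch, time, n_vitals) individuation data up to the point in time t
        states, _ = self.rnn(x)
        y_t = states[:, -1, :]      # analysis information at the last point in time
        return self.head(y_t)       # predicted occurrence probability by t+n
```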
  • the prediction module 340 that implements the prediction model may be executed by the processor 220 .
  • operation S 405 of pretraining the machine learning model may be required.
  • an updating and learning module 350 or the learning module may be executed by the processor 220 .
  • the updating and learning module 350 may train the RNN model through backpropagation through time (BPTT), using individual individuation data for a plurality of existing subjects as learning data. Through this, U, V, and W may be determined.
  • the updating and learning module 350 may train the second neural network model through backpropagation, using individual individuation data for a plurality of existing subjects and the occurrence/non-occurrence of fatal symptoms at each point in time of the existing subjects as learning data.
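  • A minimal sketch of this pretraining step (operation S 405 ) is shown below, assuming the FatalSymptomPredictor sketch above and hypothetical tensors train_x (individuation data of existing subjects, shape (batch, time, n_vitals)) and train_y (occurrence/non-occurrence labels in {0., 1.}, shape (batch, 1)); calling loss.backward() performs backpropagation through time through the unrolled RNN:

```python
import torch
import torch.nn as nn

# FatalSymptomPredictor is the sketch given earlier; train_x and train_y are
# hypothetical training tensors prepared from existing subjects.
model = FatalSymptomPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCELoss()  # occurrence probability vs. occurrence/non-occurrence label

for epoch in range(50):
    optimizer.zero_grad()
    prob = model(train_x)            # (batch, 1) occurrence probability
    loss = criterion(prob, train_y)
    # BPTT: gradients flow through every unrolled step of the RNN, so the
    # shared parameters U, V, and W are updated jointly with the prediction head.
    loss.backward()
    optimizer.step()
```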
  • the processor 220 may acquire or support the other apparatus to acquire the output layer at the point in time t+n as the analysis information by using the individuation data or the value processed from the individuation data as an input of the analysis modules 330 a and 330 b in operation S 432 .
  • the processor 220 may generate or support the other apparatus to generate an occurrence probability of the fatal symptoms up to the point in time t+n as the prediction result by using the output layer at the point in time t+n as an input of the prediction module 340 .
  • the prediction result generating method further includes operation S 440 of providing, by the processor 220 , the generated prediction result to an external entity.
  • the external entity may include a user of the computing apparatus 200 , an administrator, a medical specialist in charge of the subject, and the like.
  • any entity capable of acquiring the prediction result should be understood to be included.
  • the prediction result may be provided to the external entity.
  • the fatal symptoms early prediction result generating method may provide the prediction result about the fatal symptoms based on the pretrained machine learning model. If evaluation information about the prediction result is used as data to retrain the machine learning model, the machine learning model may perform a further accurate prediction. Therefore, the prediction result generating method according to example embodiments may further include operation S 450 of updating, by the processor 220 , or supporting the other apparatus to update the machine learning model based on evaluation information about the prediction result.
  • individuation data (acquired from vital signs) not used for previous learning may be further considered and an error found in the previous learning may be corrected. Therefore, the accuracy of the machine learning model may be enhanced. As data accumulates, the performance of machine learning continues to improve.
  • the evaluation information about the prediction result may be provided from the external entity, such as the medical specialist, etc.
  • Such update refers to proceeding with learning again based on newly provided data and thus, may be substantially identical to the aforementioned operation S 405 . That is, the analysis model and the prediction model using the analysis modules 330 a and 330 b and the prediction module 340 are modified by considering the accuracy of prediction based on the evaluation information about the prediction result. In more detail, parameters, for example, the U, V, W, etc., used for the analysis model and the prediction model are modified.
  • the example embodiments may predict fatal symptoms, such as cardiac arrest, sepsis, etc., more quickly and conveniently than a conventional method of predicting fatal symptoms that generally depends on the experience or knowledge of medical specialists.
  • the example embodiments described herein may significantly reduce the burden on medical specialists in busy clinical conditions, in which they need to make accurate determinations and predictions based on a large amount of diagnostic data every day.
  • the ability to assess a patient's risk of fatal symptoms, which a doctor acquires only after years of training, may be analyzed and learned by the computing apparatus itself based on a large amount of learning data. Accordingly, assistance may be provided in cases that a human medical specialist may overlook or cases in which it is difficult to predict fatal symptoms.
  • only patients in whom fatal symptoms are suspected to occur, for example, by a predetermined point in time based on the automatically generated prediction information may be screened, and medical staff may then only need to verify the screened patients. Accordingly, it is possible to improve the accuracy and speed of prediction of fatal symptoms.
  • FIG. 5 illustrates an example of a method of performing a data augmentation using a modified generative adversarial network (GAN) according to an example embodiment.
  • the example embodiment relates to overcoming a class imbalance issue found in conventional machine learning, that is, to enhancing the reliability and accuracy of a machine learning model, since conventional machine learning is otherwise performed mainly based on data belonging to the majority class due to the class imbalance.
  • the modified GAN may include a generator (indicated with ‘G’) configured to generate vital signs of fatal symptoms similar to reality and a discriminator (indicated with ‘D’) configured to discriminate true data from generated data.
  • non-patent document 1 [Goodfellow, Ian J.; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). “Generative Adversarial Networks”]
  • a generator is configured to generate data similar to true data to deceive a discriminator, such that the discriminator may determine the similar data as the true data, and the discriminator is configured to discriminate the true data from the generated similar data.
  • each of the generator and the discriminator updates a network weight to achieve each corresponding purpose. Accordingly, after sufficient learning, the generator may generate data similar to true data and a discrimination rate by the discriminator may converge theoretically to 0.5.
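  • For reference, the adversarial objective of the conventional GAN of Non-patent document 1 can be written as the following minimax game, in which the discriminator D is trained to distinguish true data x from generated data G(z) and the generator G is trained to fool D:

    min_G max_D  E_{x~p_data(x)}[ log D(x) ] + E_{z~p_z(z)}[ log(1 - D(G(z))) ]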
  • since the generator sufficiently trained through the conventional GAN may generate data close to true data, the aforementioned class imbalance issue of data may be overcome by using the similar data generated by the generator of the conventional GAN as learning data for training the machine learning model.
  • the conventional GAN focuses on learning simply whether the input data is true data or fake data. Thus, if the input data may belong to various labels, the conventional GAN cannot readily verify which label, among the various labels, corresponds to the input data.
  • the modified GAN modified from the conventional GAN is used herein. Therefore, various kinds (i.e., kinds classified by labels) of data may be generated and determined by further considering information of a label related to data using the modified GAN.
  • the generator of the modified GAN may include sub-generators (indicated with ‘G_label1’, ‘G_label2’, and ‘G_label3’ in FIG. 5 ), each configured to generate similar data corresponding to one of a plurality of labels. Each sub-generator generates similar data belonging to the label corresponding to that sub-generator.
  • the discriminator of the modified GAN predicts and discriminates a label corresponding to data to be determined by the discriminator instead of simply determining whether the data to be determined by the discriminator is true data or fake data.
  • the modified GAN may support data generated by the generator to become similar data close to true data, and may support the discriminator to verify the label corresponding to the true data or the similar data input to the discriminator, such that the generator may generate data of a specific label through the corresponding sub-generator.
  • the modified GAN according to the example embodiment may be trained based on true data and a kind of a label corresponding to the true data and accordingly, may generate various kinds of similar data classified by labels to be close to true data.
  • the modified GAN may replenish an insufficient quantity by generating similar data corresponding to the specific label and thereby solve a data imbalance issue about the specific label.
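  • One possible realization of such a modified GAN is sketched below (a sketch only: the per-label sub-generators follow the description above, while the choice of letting the discriminator output one of the K true labels plus an extra “fake” class, the layer sizes, and the data dimensions are assumptions of this sketch):

```python
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM, N_LABELS = 32, 60, 2   # illustrative sizes only

class SubGenerator(nn.Module):
    """Generates similar data belonging to one specific label (e.g., G_label1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, DATA_DIM))
    def forward(self, z):
        return self.net(z)

class Generator(nn.Module):
    """Holds one sub-generator per label, as in FIG. 5."""
    def __init__(self, n_labels=N_LABELS):
        super().__init__()
        self.subs = nn.ModuleList(SubGenerator() for _ in range(n_labels))
    def forward(self, z, label):
        return self.subs[label](z)

class Discriminator(nn.Module):
    """Predicts which label the input corresponds to; an extra class for
    generated (fake) data is one way to keep the true/fake role as well."""
    def __init__(self, n_labels=N_LABELS):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, n_labels + 1))
    def forward(self, x):
        return self.net(x)  # logits over {label_1, ..., label_K, fake}
```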
  • the data classification method includes operation S 100 ′ of acquiring, by the computing apparatus 200 through the communicator 210 , or supporting another apparatus interacting with the computing apparatus 200 to acquire true data.
  • true data may be a time series signal, however, without being limited thereto, any data to be classified may be included in the true data.
  • the data classification method further includes operation S 200 ′ of training, by the computing apparatus 200 through the processor 220 , or allowing the other apparatus interacting through the communicator 210 to train the generator and the discriminator of the modified GAN based on label information of a label corresponding to the acquired true data and the true data.
  • the data classification method further includes operation S 300 ′ of training, by the computing apparatus 200 through the processor 220 , or allowing the other apparatus to train the machine learning model by generating the similar data using the trained modified GAN and by using (i) the true data and the similar data or (ii) the similar data as learning data of a predetermined machine learning model for classification.
  • the predetermined machine learning model for classification may be, for example, a convolutional neural network (CNN) or a recurrent neural network (RNN).
  • the data classification method further includes operation S 400 ′ of, in response to acquiring data to be classified by the communicator 210 of the computing apparatus 200 , generating, by the computing apparatus 200 through the processor 220 , or supporting the other apparatus to generate classification information of the data to be classified by classifying the data to be classified based on the machine learning model.
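  • Continuing the previous sketch, operations S 300 ′ and S 400 ′ might look roughly as follows (x_true, y_true, and x_new are hypothetical tensors of labeled true data and newly acquired data; the simple feed-forward classifier stands in for whatever predetermined classification model is chosen):

```python
import torch
import torch.nn as nn

# `gen` is assumed to be a Generator trained as in the previous sketch;
# label index 0 is assumed to be the minority label (e.g., cardiac arrest).
minority = 0
z = torch.randn(1000, NOISE_DIM)
x_aug = gen(z, minority).detach()                    # similar data for the minority label
y_aug = torch.full((1000,), minority, dtype=torch.long)

x_train = torch.cat([x_true, x_aug])                 # option (i): true data + similar data
y_train = torch.cat([y_true, y_aug])

clf = nn.Sequential(nn.Linear(DATA_DIM, 64), nn.ReLU(), nn.Linear(64, N_LABELS))
optimizer = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(100):                             # S 300': train the classification model
    optimizer.zero_grad()
    loss_fn(clf(x_train), y_train).backward()
    optimizer.step()

predicted_labels = clf(x_new).argmax(dim=1)          # S 400': classify newly acquired data
```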
  • the data classification method may further include operation S 500 ′ of providing, by the computing apparatus 200 through the processor 220 , or supporting the other apparatus to provide the classification information to an external entity.
  • the external entity may include a user of the computing apparatus, an administrator, and the like.
  • any entity having a right to acquire the classification information may be included.
  • the data classification method may further include operation S 600 ′ of updating, by the computing apparatus 200 , or supporting the other apparatus to update the machine learning model based on evaluation information about an accuracy of the classification information.
  • the aforementioned data classification method may provide classification information about data to be classified based on a predetermined machine learning model. Therefore, in the case of using evaluation information about an accuracy of classification information as retraining data, a further accurate prediction may be performed. Accordingly, the data classification method may further include operation S 600 ′ of updating, by the computing apparatus 200 through the processor 220 , or supporting the other apparatus to update at least one of the machine learning model and the modified GAN based on evaluation information about the classification information.
  • operation S 600 ′ may include an operation of directly updating, by the computing apparatus 200 , or supporting the other apparatus to update, the machine learning model based on the evaluation information about the classification information, or an operation of indirectly updating, or supporting the other apparatus to update, the machine learning model by training the modified GAN based on the evaluation information about the classification information and using data generated by the generator of the modified GAN.
  • learning data not used for previous learning may be further considered and an error found in the previous learning may be corrected. Therefore, the accuracy of the modified GAN may be enhanced. As data accumulates, the classification performance of the machine learning model continues to improve. Also, according to example embodiments, errors in true data being provided may be significantly reduced and a class imbalance of learning data used to train the machine learning model may be solved. Therefore, the reliability of the trained machine learning model may be improved.
  • even when the class imbalance of learning data is high in machine learning, it is possible to enhance the accuracy of a classification model trained by existing machine learning through data augmentation that generates minority-class learning data similar to true data.
  • Examples of the media may include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROM discs and DVDs; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as ROM, RAM, flash memory, and the like.
  • Examples of program instructions may include machine code, such as that produced by a compiler, and higher-level language code that may be executed by a computer using an interpreter.
  • the hardware devices may be configured to operate as at least one software module to perform processing of the example embodiments, or vice versa.
  • the hardware devices may include a processor, such as a CPU or a GPU, configured to combine with a memory, such as ROM/RAM, for storing program instructions and to execute the instructions stored in the memory, and may include a communicator configured to transmit and receive signals to and from an external apparatus.
  • the hardware devices may include a keyboard, a mouse, and other external input devices for receiving instructions produced by developers.
  • a prediction performance may be continuously improved by using a method of this disclosure.
  • a data classification method using data augmentation may apply to many medical determination situations and thus may improve the otherwise low accuracy of a classification model caused by the scarcity of data related to cardiac arrest, sepsis, etc., since most subjects do not show the corresponding symptoms.
  • equal or equivalent modifications may include, for example, a logically equivalent method that may achieve the same result as that acquired by implementing the method according to this disclosure.

Abstract

The present invention relates to a method for generating a prediction result for predicting an occurrence of fatal symptoms of a subject in advance, a method for performing data classification by using data augmentation in machine learning for the same, and a computing device using the same. Particularly, the computing device according to the present invention acquires vital signs of the subject, converts the same into individuated data, generates analysis information from the individuated data on the basis of a machine learning model, generates a prediction result by referring to the analysis information, and provides the prediction result to an external entity.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a Continuation Application of U.S. patent application Ser. No. 16/638,250, filed on Feb. 11, 2020, now pending, which is a National Stage Application of International Application No. PCT/KR2018/008918, filed on Aug. 7, 2018, which claims priority of Korean Patent Application Nos. 10-2017-0102265, filed on Aug. 11, 2017, and 10-2017-0106529, filed on Aug. 23, 2017. The contents of which are all incorporated by references herein in their entireties.
    BACKGROUND
    Technical Field
  • Example embodiments relate to a method of generating a prediction result for early predicting an occurrence of fatal symptoms of a subject, a method of classifying data using data augmentation for machine learning, and a computing apparatus using the methods.
    Related Art
  • In medical clinical settings, an occurrence of fatal symptoms of a subject, for example, a patient, such as cardiac arrest and sepsis, may significantly decrease the survival discharge rate. For example, the survival discharge rate in cases of cardiac arrest is only about 20 to 30%. Since cardiac arrest is preceded by a change in vital signs, early prediction thereof is possible. However, a prediction method according to the related art mainly depends on the experience or knowledge of a medical specialist, for example, a nurse or a doctor in charge. Therefore, the risk of the subject or the patient may be evaluated quite differently depending on an individual's capability. Also, there is a shortage of emergency medical staff available to perform such evaluations.
  • Briefly describing the prediction method according to the related art, the risk of fatal symptoms, such as cardiac arrest, is determined by assigning scores to vital sign values according to conventional rules. This approach yields a high rate of false negatives or false positives. Here, a false negative refers to a case in which the risk is not predicted for a patient in whom fatal symptoms actually occur, and a false positive refers to a case in which an occurrence of fatal symptoms is predicted even though the fatal symptoms do not actually occur.
  • In practice, an imbalance of data becomes an issue when classifying data into classes through machine learning.
  • For example, a case of predicting cardiac arrest of a patient corresponds to a 2-class classification in which a series of vital signs acquired from the patient are classified into vital signs (first class) corresponding to cardiac arrest or classified into vital signs (second class) of a normal patient not having cardiac arrest, through a machine learning using the series of vital signs as learning data. However, since many subjects are subjects not showing corresponding symptoms, vital sign data corresponding to cardiac arrest are few. That is, a severe class imbalance of learning data occurs.
  • That is, a machine learning algorithm performs learning with a set of data biased toward one class, and thus the resulting classification model may have decreased overall accuracy and, in particular, decreased accuracy in classifying data corresponding to the other class. In a classification, it is important to correctly classify the class to which a minority of the data belongs (a minority class, the first class in the above example) as well as the class to which a majority of the data belongs (a majority class, the second class in the above example). Therefore, the aforementioned issue needs to be overcome.
  • Accordingly, proposed herein is a fatal symptoms early prediction result generating method that may predict fatal symptoms earlier and more accurately than a conventional method. Also, to this end, there is proposed a method of improving a conventional GAN so as to increase the accuracy of a classification model trained through machine learning by generating data similar to true data, that is, by performing effective data augmentation to overcome a severe class imbalance of data.
    • (Reference document) Non-patent document 1: Goodfellow, Ian J.; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). “Generative Adversarial Networks”
    SUMMARY
    Technical Subject
  • Example embodiments are to detect a patient with imminent fatal symptoms, such as cardiac arrest, and to increase a survival rate of the patient by reducing existing high false negatives.
  • Also, example embodiments are to increase the accuracy of a classification model trained through existing machine learning by using data augmentation to generate minority-class learning data similar to true data, even when the class imbalance of the learning data in machine learning is high.
  • Also, example embodiments are to improve medical treatment conditions by saving unnecessary consultation hours through a decrease in existing high false positives.
    Technical Solution
  • Characteristic constitutions of this disclosure to accomplish the aforementioned objectives and to achieve characteristic effects of the disclosure are as follows:
  • According to an aspect of example embodiments, there is provided a method of generating a prediction result for early predicting an occurrence of fatal symptoms of a subject, the method including: (a) acquiring, by a computing apparatus, or supporting another apparatus interacting with the computing apparatus to acquire vital signs of the subject; (b) converting, by the computing apparatus, or supporting the other apparatus to convert the acquired vital signs to individuation data that is data individuated for the subject; (c) generating, by the computing apparatus, or supporting the other apparatus to generate analysis information about an early prediction of the fatal symptoms from the individuation data based on a machine learning model for the early prediction of the fatal symptoms, and generating or supporting the other apparatus to generate the prediction result as a result of predicting the occurrence of the fatal symptoms during a duration from a point in time t corresponding to a single specific vital sign among the vital signs to a point in time t+n that is a point in time after a desired time interval n by referring to the generated analysis information; and (d) providing, by the computing apparatus, or supporting the other apparatus to provide the generated prediction result to an external entity.
  • Desirably, the method may further include (e) updating, by the computing apparatus, or supporting the other apparatus to update the machine learning model based on evaluation information about the prediction result.
  • According to another aspect of example embodiments, there is provided a computing apparatus for generating a prediction result for early predicting an occurrence of fatal symptoms of a subject, the computing apparatus including: a communicator configured to acquire vital signs of the subject; and a processor configured to convert or support another apparatus interacting through the communicator to convert the acquired vital signs to individuation data that is data individuated for the subject. The processor is configured to generate or support the other apparatus to generate analysis information about an early prediction of the fatal symptoms from the individuation data based on a machine learning model for the early prediction of the fatal symptoms, generate or support the other apparatus to generate the prediction result as a result of predicting the occurrence of the fatal symptoms during a duration from a point in time t corresponding to a single specific vital sign among the vital signs to a point in time t+n that is a point in time after a desired time interval n by referring to the generated analysis information, and provide the generated prediction result to an external entity.
  • Desirably, the processor may be configured to update or support the other apparatus to update the machine learning model based on evaluation information about the prediction result.
  • According to another aspect of example embodiments, there is provided a method of classifying data using data augmentation for a machine learning, the method including: (a) acquiring, by a computing apparatus, or supporting another apparatus interacting with the computing apparatus to acquire true data; (b) training, by the computing apparatus, or allowing the other apparatus to train a generator and a discriminator of a modified generative adversarial network (GAN) based on information of a label corresponding to the acquired true data and the true data, wherein, in the modified GAN, the generator includes a sub-generator configured to generate similar data corresponding to each of a plurality of labels, the sub-generator is configured to generate similar data belonging to a label corresponding to the sub-generator, and the discriminator is configured to predict a specific label that is a label corresponding to data to be discriminated by the discriminator among the plurality of labels; (c) training, by the computing apparatus, or allowing the other apparatus to train a machine learning model by generating the similar data using the trained modified GAN and by using (i) the true data and the similar data or (ii) the similar data as learning data of a predetermined machine learning model for classification; and (d), in response to acquiring data to be classified, generating, by the computing apparatus, or supporting the other apparatus to generate classification information of the data to be classified by classifying the data to be classified based on the trained machine learning model. For example, the machine learning model for classification may use data generated by the generator of the modified GAN trained based on true data as learning data.
  • According to another aspect of example embodiments, there is provided a computing apparatus for classifying data using data augmentation for a machine learning, the computing apparatus including: a communicator configured to acquire true data; and a processor configured to train or allow another apparatus interacting through the communicator to train a generator and a discriminator of a modified generative adversarial network (GAN) based on information of a label corresponding to the acquired true data and the true data, wherein, in the modified GAN, the generator includes a sub-generator configured to generate similar data corresponding to each of a plurality of labels, the sub-generator is configured to generate similar data belonging to a label corresponding to the sub-generator, and the discriminator is configured to predict a specific label that is a label corresponding to data to be discriminated by the discriminator among the plurality of labels. The processor is configured to train or allow the other apparatus to train a machine learning model by generating the similar data using the trained modified GAN and by using (i) the true data and the similar data or (ii) the similar data as learning data of a predetermined machine learning model for classification, and, in response to acquiring data to be classified, generate or support the other apparatus to generate classification information of the data to be classified by classifying the data to be classified based on the trained machine learning model.
  • According to another aspect of example embodiments, there is provided a computer program stored in media including instructions that cause a computing apparatus to perform the method.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Example embodiments will be described in more detail with reference to the following figures, which illustrate only a portion of the example embodiments; those of ordinary skill in the art (hereinafter, “those skilled in the art”) to which this disclosure pertains may readily derive other figures from these figures without inventive work:
  • FIG. 1 illustrates an example of describing a recurrent neural network (RNN) that is a machine learning model according to an example embodiment.
  • FIG. 2 is a diagram illustrating an example of a computing apparatus configured to perform a method of generating a prediction result for early predicting an occurrence of fatal symptoms of a subject (hereinafter, also referred to as a “fatal symptoms early prediction result generating method”) and a method of classifying data using a data augmentation (hereinafter, also referred to as a “data classification method”) according to an example embodiment.
  • FIG. 3 is a diagram illustrating an example of hardware and software architectures of a computing apparatus configured to perform a fatal symptoms early prediction result generating method according to an example embodiment.
  • FIG. 4 is a flowchart illustrating an example of a fatal symptoms early prediction result generating method according to an example embodiment.
  • FIG. 5 illustrates an example of a method of performing a data augmentation using a modified generative adversarial network (GAN) according to an example embodiment.
  • DETAILED DESCRIPTION
  • The following detailed description of this disclosure refers to the accompanying drawings, in which specific example embodiments of the disclosure are illustrated as examples, to fully describe the purposes, technical solutions, and advantages of the disclosure. The example embodiments are described in sufficient detail for those skilled in the art to carry out the disclosure.
  • The term “fatal symptoms,” when used in the detailed description and claims and modifications thereof, is not limited to cardiac arrest, which is merely one example of targets to which the disclosure applies; it should be understood as a concept including any kind of clinical phenomenon, such as sepsis, that may pose a great risk to a subject's life through time-series changes.
  • Also, the term “vital signs,” when used in the detailed description and claims, should not be interpreted as limited to the general meaning of measurement values of a subject, such as a body temperature, an electrocardiogram (ECG), a respiration, a pulse rate, a blood pressure, an oxygen saturation, a skin conductivity, and the like; it should be understood to also include electroencephalography (EEG) signals, the amount or concentration of a specific substance in biological samples acquirable through other measurements, and the like.
  • Here, “biological samples” should be understood as various kinds of substances that may be collected from the subject, for example, blood, serum, urine, lymph, cerebrospinal fluid, saliva, semen, vaginal fluid, etc., of the subject.
  • The term “training/learning,” when used in the detailed description and claims, refers to performing machine learning through computing according to a procedure; those skilled in the art will understand that the term is not intended to refer to mental acts, such as human educational activities.
  • Also, the terms “comprises/includes,” when used in the detailed description and claims and modifications thereof, are not intended to exclude other technical features, additions, components, or operations. Those skilled in the art may clearly understand some of the other purposes, advantages, and features of the disclosure from this specification and others from implementations of the disclosure. The following examples and drawings are provided as examples only and are not intended to limit the disclosure.
  • Further, the disclosure may include any possible combinations of the example embodiments described herein. It should be understood that, although various example embodiments differ from each other, they need not be mutually exclusive. For example, a specific shape, structure, and feature described herein may be implemented as another example embodiment without departing from the spirit and scope of the disclosure. Also, it should be understood that a position or an arrangement of an individual component of each disclosed example embodiment may be modified without departing from the spirit and scope of the disclosure. Accordingly, the following detailed description is not to be construed as limiting, and the scope of the disclosure is defined by the claims and their equivalents, together with all variations within the scope of the claims. In the drawings, like reference numerals refer to like elements throughout.
  • Unless the context clearly indicates otherwise, the singular forms “a”, “an”, and “the”, are intended to include the plural forms as well. Also, when description related to a known configuration or function is deemed to render the present disclosure ambiguous, the corresponding description is omitted.
  • Hereinafter, example embodiments of the disclosure are described in detail with reference to the accompanying drawings such that those skilled in the art may easily perform the example embodiments.
  • FIG. 1 illustrates an example of describing a recurrent neural network (RNN) that is a machine learning model according to an example embodiment.
  • Referring to FIG. 1, the machine learning model used herein may briefly be described as a deep neural network model, that is, a model in which artificial neural networks are stacked in multiple layers. Such a deeply structured neural network is referred to as a deep neural network (DNN) and, referring to FIG. 1, may be trained by automatically learning features of the vital signs and the relationships between the vital signs from a large amount of data in a multilayered network structure, thereby reducing the error of an objective function, that is, improving the prediction accuracy for fatal symptoms. The DNN is also described as modeling the connections between nerve cells of the human brain and is becoming a next-generation model of artificial intelligence (AI).
  • Among the DNN models used herein, a recurrent neural network (RNN) may be used to analyze sequentially input data, as shown in FIG. 1. The RNN model is structured to detect features of the data according to a time sequence and to selectively apply, when analyzing the current point in time, the main features to be referred to from among the features of previous points in time. For example, referring to FIG. 1, when analyzing data input at a point in time t+1, the data may be analyzed by referring to the main features analyzed at the points in time t−1 and t. As described above, according to example embodiments, it is possible to extract a change in vital signs over time using the structure of the RNN and to use the extracted change for prediction of fatal symptoms.
  • For example, the RNN unfolded along the time-series order, time flow, or time axis may be understood as a DNN having an infinite number of layers. In FIG. 1, x_t denotes the input vector at the point in time t and s_t denotes the hidden state (i.e., the memory of the neural network) at the point in time t.
  • To describe this again, the RNN of FIG. 1 follows s_t = f(U x_t + W s_{t-1}) and y = g(V s_t). For reference, y is indicated with o in FIG. 1. Here, f denotes an activation function (e.g., a tanh or ReLU function) and U, V, and W denote parameters of the neural network. U, V, and W are parameters shared equally across the stages of all points in time in the RNN, which differs from a feedforward neural network. Also, g denotes an activation function (typically, a softmax function) for the output layer, and y denotes the output vector of the neural network at the point in time t. A method using the RNN according to an example embodiment is further described below.
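  • As an illustration only (not part of the claimed subject matter), the recurrence above may be sketched in a few lines of NumPy. The dimensions, the tanh/softmax choices, and the random toy inputs below are assumptions made for the sketch rather than values prescribed by this disclosure.

```python
import numpy as np

def rnn_step(x_t, s_prev, U, W, V):
    """One step of the recurrence s_t = f(U x_t + W s_{t-1}), y = g(V s_t)."""
    s_t = np.tanh(U @ x_t + W @ s_prev)            # f: tanh activation for the hidden state
    logits = V @ s_t
    y = np.exp(logits) / np.exp(logits).sum()      # g: softmax over the output layer
    return s_t, y

# Toy dimensions (assumed): 4 vital-sign features, 8 hidden units, 2 output units.
rng = np.random.default_rng(0)
U, W, V = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(2, 8))
s = np.zeros(8)
for x in rng.normal(size=(5, 4)):                  # five time steps of individuated vital signs
    s, y = rnn_step(x, s, U, W, V)                 # s carries information from earlier points in time
```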
  • FIG. 2 is a diagram illustrating an example of a computing apparatus configured to perform a fatal symptoms early prediction result generating method according to an example embodiment.
  • Referring to FIG. 2 , a computing apparatus 200 according to an example embodiment includes a communicator 210 and a processor 220, and may directly or indirectly communicate with an external computing apparatus (not shown) through the communicator 210.
  • In detail, the computing apparatus 200 may achieve a desired system performance using a combination of typical computer hardware (e.g., an apparatus including a computer processor, a memory, a storage, an input device and an output device, components of other existing computing apparatuses, etc.; an electronic communication apparatus such as a router, a switch, etc.; an electronic information storage system such as a network attachment storage (NAS) and a storage area network (SAN)) and computer software (i.e., instructions that enable a computing apparatus to function in a specific manner).
  • The communicator 210 of the computing apparatus 200 may transmit and receive a request and a response with another interacting computing apparatus. As an example, the request and the response may be implemented using the same transmission control protocol (TCP) session. However, it is provided as an example only. For example, the request and the response may be transmitted and received as a user datagram protocol (UDP) datagram. In addition, in a broad sense, the communicator 210 may include a keyboard, a mouse, and other external input devices to receive a command or an instruction, etc.
  • Also, the processor 220 of the computing apparatus 200 may include a hardware configuration, such as a data bus, a micro processing unit (MPU) or a central processing unit (CPU), a cache memory, and the like. Also, the processor 220 may further include a software configuration, such as an application that performs a specific function, an operating system (OS), and the like.
  • FIG. 3 is a diagram illustrating an example of hardware and software architectures of a computing apparatus configured to perform a fatal symptoms early prediction result generating method according to an example embodiment.
  • Describing a method and a configuration of an apparatus according to an example embodiment with reference to FIG. 3 , the computing apparatus 200 may include a vital sign acquisition module 310 as a component. Those skilled in the art may understand that the vital sign acquisition module 310 may be implemented through an interaction with the communicator 210 included in the computing apparatus 200, or an interaction between the communicator 210 and the processor 220.
  • The vital sign acquisition module 310 may acquire vital signs of a subject. For example, vital signs may be acquired from an electronic medical record (EMR) of the subject. However, it is provided as an example only.
  • The acquired vital signs may be forwarded to an individuation module 320. The individuation module 320 converts the vital signs to individuation data that is data individuated or personalized for the subject. For example, the vital signs may be acquired from the subject that is a human (Homo sapiens). However, those skilled in the art may understand that they are not limited thereto. That is, “individuation” may be performed with respect to a specific animal subject in correspondence to performing “personalization” with respect to a specific human subject.
  • Such conversion is performed because a normal vital sign and a vital sign just before an occurrence of fatal symptoms differ for each subject. For example, one subject may breathe 45 times per minute when normal, while another subject may breathe 45 times per minute just before cardiac arrest. As described above, the acquired vital signs need to be adjusted to fit the individual subject instead of being used as they are.
  • As an example of the conversion for individuating the vital signs for the subject, where necessary, the individuation module 320 may convert the vital signs to the individuation data by calculating a deviation of the vital signs of the entire time duration, obtained by subtracting the average of the vital signs of an initial desired time duration from the vital signs of the entire time duration, and by calculating a standard score (z-score) of the vital signs of the entire time duration of the subject as the individuation data by referring to the average and variance of the vital signs of a plurality of other subjects.
  • In more detail, assuming that the vital signs of the first 10 hours of a subject are normal, the deviation may be calculated by subtracting the average of the vital signs of those 10 hours from the vital signs of the remaining hours. A z-score for the entire vital signs of the subject may then be acquired using the average and variance of the corresponding vital signs computed over all subjects. In this manner, a subsequent vital sign is considered as a value relatively deviated from the normal vital signs of each subject rather than as an absolute value.
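  • A minimal sketch of this individuation step, assuming hourly samples, a 10-hour baseline window, and placeholder population statistics (none of which are mandated by this disclosure), might look as follows.

```python
import numpy as np

def individuate(vitals, baseline_hours=10, pop_mean=0.0, pop_var=1.0):
    """Convert one raw vital-sign series into individuation data.

    vitals: 1-D array of one vital sign sampled hourly over the whole stay.
    The first `baseline_hours` samples are assumed to reflect the subject's normal state;
    pop_mean / pop_var are the mean and variance of the same deviation over many other
    subjects (placeholder values here).
    """
    baseline = vitals[:baseline_hours].mean()
    deviation = vitals - baseline                        # deviation from the subject's own normal
    z_score = (deviation - pop_mean) / np.sqrt(pop_var)  # standard score relative to other subjects
    return z_score

hourly_resp_rate = np.array([18, 19, 18, 17, 18, 19, 18, 18, 17, 18, 24, 30, 36, 45], dtype=float)
individuation_data = individuate(hourly_resp_rate)
```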
  • When the individuation data that is data individuated for each subject is input to a first analysis module 330 a and a second analysis module 330 b, analysis information about the early prediction may be generated. A prediction module 340 may generate a prediction result about an occurrence probability of fatal symptoms by a specific point in time based on the analysis information. A process of generating the analysis information and the prediction result is further described below.
  • An updating and learning module 350 may function to pretrain the machine learning model used in performing the method according to the example embodiments for predicting an occurrence of fatal symptoms of the subject, or to update the machine learning model based on evaluation information about the prediction result obtained by performing the method.
  • Hereinafter, a method of generating a result of early predicting fatal symptoms according to an example embodiment is described with reference to FIG. 4 . FIG. 4 is a flowchart illustrating an example of a fatal symptoms early prediction result generating method according to an example embodiment.
  • Referring to FIG. 4, a prediction result generating method according to an example embodiment includes operation S410 of acquiring, by the vital sign acquisition module 310 implemented by the communicator 210 of the computing apparatus 200, vital signs of the subject. For example, the vital signs may be time series vital signs from a point in time t0 to a point in time t. However, without being limited thereto, the vital signs may be vital signs at a single point in time. Those skilled in the art may understand from this specification that a prediction result about fatal symptoms may be generated even with respect to vital signs at a single point in time.
  • Referring to FIG. 4 , the prediction result generating method further includes operation S420 of converting, by the individuation module 320 implemented by the processor 220 of the computing apparatus 200, or supporting another apparatus interacting through the communicator 210 to convert the acquired vital signs to individuation data that is data individuated for the subject.
  • According to an example embodiment, as described above, in operation S420, the individuation module 320 may calculate a deviation of vital signs of an entire time duration by subtracting an average of vital signs of an initial desired time duration among the vital signs from the vital signs of the entire time duration, and may calculate a standard score (z-score) of the vital signs of the entire time duration of the subject as the individuation data by referring to an average and a variance of vital signs of a plurality of other subjects. Here, those skilled in the art may understand that a method of generating individuation data by converting to fit a characteristic of a patient for each subject is not limited to the aforementioned method.
  • Referring again to FIG. 4 , the fatal symptoms early prediction result generating method further includes operation S432 of generating or supporting the other apparatus to generate analysis information about an early prediction of the fatal symptoms from the individuation data based on a machine learning model for the early prediction of the fatal symptoms and operation S434 of generating or supporting the other apparatus to generate the prediction result as a result of predicting the occurrence of the fatal symptoms during a duration from a point in time t corresponding to a single specific vital sign among the vital signs to a point in time t+n that is a point in time after a desired time interval n by referring to the generated analysis information.
  • According to an example embodiment, a recurrent neural network (RNN) model may be included as an analysis model in the machine learning model. The analysis modules 330 a and 330 b that implement the analysis model may be executed by the processor 220. As described above, the RNN model follows s_t = f(U x_t + W s_{t-1}) and y = g(V s_t). Here, the x_t denotes the individuation data that is an input vector at the point in time t, or a value processed from the individuation data. The value processed from the individuation data may refer to, for example, a variance of the individuation data (from a previous point in time to the current point in time) or an amount of change in the variance.
  • Also, the s_t denotes a hidden state corresponding to a memory of the RNN model at the point in time t, the s_{t-1} denotes the hidden state at the point in time t−1, the U, V, and W denote neural network parameters shared equally across all points in time of the RNN model, the f denotes a predetermined first activation function selected to calculate the hidden state, the y denotes the output layer that is a latent feature of the RNN model at the point in time t and serves as the analysis information, and the g denotes a predetermined second activation function selected to calculate the output layer.
  • In detail, the first analysis module 330 a of the analysis modules 330 a and 330 b functions to apply a relationship among the individuation data by referring to the individuation data at the point in time t, and may correspond to, for example, U x_t in the RNN model. Also, the second analysis module 330 b of the analysis modules 330 a and 330 b functions to apply a change in the individuation data over time by referring to the individuation data up to the point in time t−1, and may correspond to, for example, W s_{t-1} in the RNN model.
  • Here, the first activation function f may be a tanh( ) or ReLU function generally used. Also, the second activation function g may be a softmax function generally used. The first activation function and the second activation function may be selectively applied based on each purpose and complexity of calculation.
  • Also, according to the example embodiment, the machine learning model may further include a second neural network model including at least one fully connected layer for calculating an occurrence probability of the fatal symptoms from the output layer (y) as at least a portion of a prediction model. The prediction module 340 that implements the prediction model may be executed by the processor 220.
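  • For illustration, the split into an analysis model (the RNN) and a prediction model (at least one fully connected layer producing an occurrence probability) could be sketched roughly as below. The use of PyTorch, the sigmoid output, and the toy dimensions are assumptions of the sketch, and the fully connected head here is applied directly to the hidden state, folding V and the second neural network together for brevity.

```python
import torch
import torch.nn as nn

class FatalSymptomPredictor(nn.Module):
    """Analysis model (RNN over individuation data) followed by a prediction model."""
    def __init__(self, n_features=4, hidden_dim=8):
        super().__init__()
        # Recurrence of the form s_t = tanh(U x_t + W s_{t-1}) (plus bias terms).
        self.analysis = nn.RNN(n_features, hidden_dim, batch_first=True)
        # Fully connected prediction model mapping the latent feature to a probability.
        self.prediction = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, time, features) individuation data
        states, _ = self.analysis(x)
        return self.prediction(states[:, -1])   # occurrence probability of fatal symptoms by t + n

model = FatalSymptomPredictor()
probability = model(torch.randn(2, 24, 4))      # two subjects, 24 hourly steps, 4 features (dummy)
```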
  • Prior to performing the fatal symptoms early prediction result generating method according to example embodiments, operation S405 of pretraining the machine learning model may be required. To this end, the updating and learning module 350 may be executed by the processor 220.
  • For learning of the RNN model, the updating and learning module 350 may train the RNN model through backpropagation through time (BPTT), using individual individuation data for a plurality of existing subjects as learning data. Through this, U, V, and W may be determined.
  • Also, for learning of the second neural network model, the updating and learning module 350 may train the second neural network model through backpropagation, using individual individuation data for a plurality of existing subjects and the occurrence/non-occurrence of fatal symptoms at each point in time of the existing subjects as learning data.
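  • The two training steps above might, for illustration, be sketched as a single gradient-descent loop over dummy learning data; the framework, optimizer, loss, and tensor shapes are assumptions of the sketch, and the automatic differentiation stands in for BPTT over the unrolled recurrence.

```python
import torch
import torch.nn as nn

# Dummy learning data (assumed shapes): 32 existing subjects, 24 hourly steps, 4 vital-sign
# features, plus one label per subject for occurrence/non-occurrence of fatal symptoms.
sequences = torch.randn(32, 24, 4)
labels = torch.randint(0, 2, (32, 1)).float()

rnn = nn.RNN(4, 8, batch_first=True)                   # analysis model (RNN)
head = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())    # second (prediction) neural network
optimizer = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.BCELoss()

for epoch in range(10):
    optimizer.zero_grad()
    states, _ = rnn(sequences)
    prob = head(states[:, -1])
    loss = loss_fn(prob, labels)
    loss.backward()        # backpropagation (through time for the unrolled RNN)
    optimizer.step()       # updates the shared RNN parameters and the prediction head
```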
  • Further describing operations S432 and S434 according to the example embodiment, the processor 220 may acquire or support the other apparatus to acquire the output layer at the point in time t+n as the analysis information by using the individuation data or the value processed from the individuation data as an input of the analysis modules 330 a and 330 b in operation S432. In operation S434, the processor 220 may generate or support the other apparatus to generate an occurrence probability of the fatal symptoms up to the point in time t+n as the prediction result by using the output layer at the point in time t+n as an input of the prediction module 340.
  • Referring again to FIG. 4 , the prediction result generating method further includes operation S440 of providing, by the processor 220, the generated prediction result to an external entity. Here, the external entity may include a user of the computing apparatus 200, an administrator, a medical specialist in charge of the subject, and the like. In addition thereto, any entity capable of acquiring the prediction result should be understood to be included.
  • Meanwhile, similar to the example embodiment of FIG. 4 , when the occurrence probability of the fatal symptoms is predicted to be higher than a predetermined probability, the prediction result may be provided to the external entity.
  • The fatal symptoms early prediction result generating method according to example embodiments may provide the prediction result about the fatal symptoms based on the pretrained machine learning model. If evaluation information about the prediction result is used as data to retrain the machine learning model, the machine learning model may perform a more accurate prediction. Therefore, the prediction result generating method according to example embodiments may further include operation S450 of updating, by the processor 220, or supporting the other apparatus to update the machine learning model based on the evaluation information about the prediction result. Here, individuation data (acquired from vital signs) not used for previous learning may be further considered and errors found in the previous learning may be corrected. Therefore, the accuracy of the machine learning model may be enhanced. As data accumulates, the performance of the machine learning model continues to improve.
  • Here, the evaluation information about the prediction result may be provided from the external entity, such as the medical specialist, etc.
  • Such update refers to proceeding with learning again based on newly provided data and thus, may be substantially identical to the aforementioned operation S405. That is, the analysis model and the prediction model using the analysis modules 330 a and 330 b and the prediction module 340 are modified by considering the accuracy of prediction based on the evaluation information about the prediction result. In more detail, parameters, for example, the U, V, W, etc., used for the analysis model and the prediction model are modified.
  • The example embodiments may predict fatal symptoms, such as cardiac arrest, sepsis, etc., more quickly and conveniently than conventional methods, which generally depend on the experience or knowledge of medical specialists.
  • The example embodiments described herein may significantly reduce the burden on medical specialists in busy clinical settings, where they need to make accurate determinations and predictions based on a large amount of diagnostic data each day. Using technology based on machine learning, particularly a deep neural network (DNN), the risk of a patient with respect to fatal symptoms, which a doctor acquires only after years of training, may be analyzed and learned by a computing apparatus itself based on a large amount of learning data. Accordingly, the apparatus may assist in determining cases that a human medical specialist may overlook or cases in which it is difficult to predict fatal symptoms. For example, according to the example embodiments, only patients for whom fatal symptoms are suspected to occur by, for example, a predetermined point in time, based on the automatically generated prediction information, may be screened, and the medical staff need only verify the screened patients. Accordingly, it is possible to improve the accuracy and speed of prediction of fatal symptoms.
  • FIG. 5 illustrates an example of a method of performing a data augmentation using a modified generative adversarial network (GAN) according to an example embodiment.
  • This example embodiment relates to overcoming the class imbalance issue found in conventional machine learning, that is, to enhancing the reliability and accuracy of a machine learning model, since conventional machine learning is otherwise performed mainly based on data belonging to the majority class due to the class imbalance.
  • To this end, in the example embodiment, the following data augmentation may be performed. Referring to FIG. 5 , the modified GAN according to the example embodiment may include a generator (indicated with ‘G’) configured to generate vital signs of fatal symptoms similar to reality and a discriminator (indicated with ‘D’) configured to discriminate true data from generated data.
  • In detail, according to the paper regarding the conventional GAN, non-patent document 1: [Goodfellow, Ian J.; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). “Generative Adversarial Networks”], a generator is configured to generate data similar to true data to deceive a discriminator, such that the discriminator may determine the similar data as the true data, and the discriminator is configured to discriminate the true data from the generated similar data. During progress of learning by the GAN, each of the generator and the discriminator updates a network weight to achieve each corresponding purpose. Accordingly, after sufficient learning, the generator may generate data similar to true data and a discrimination rate by the discriminator may converge theoretically to 0.5.
  • Accordingly, since the generator sufficiently trained through the conventional GAN may generate data close to true data, the aforementioned class imbalance issue of data may be overcome by using the similar data generated by the generator of the conventional GAN as learning data for training the machine learning model.
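  • As an illustration of this conventional adversarial training, a toy loop over dummy one-dimensional data is sketched below; the PyTorch setup, the network sizes, and the synthetic “real” data are assumptions made only to make the generator/discriminator objectives concrete.

```python
import torch
import torch.nn as nn

real_data = torch.randn(256, 24) * 0.5 + 2.0     # dummy "true" windows of a vital sign

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 24))
D = nn.Sequential(nn.Linear(24, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(200):
    # Discriminator update: push real samples toward 1 and generated samples toward 0.
    fake = G(torch.randn(64, 16)).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label generated data as real.
    g_loss = bce(D(G(torch.randn(64, 16))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
# After sufficient training, D's output on generated samples drifts toward 0.5.
```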
  • The conventional GAN focuses on learning that depends simply on whether the input data is true data or fake data. Thus, if the input data may belong to various labels, the conventional GAN cannot readily verify which label among the various labels corresponds to the input data.
  • To overcome the above limitation, a modified GAN, modified from the conventional GAN, is used herein. Therefore, various kinds of data (i.e., kinds classified by labels) may be generated and discriminated by further considering information of a label related to the data using the modified GAN.
  • In detail, dissimilar to the conventional GAN, the generator of the modified GAN may include sub-generators (indicated with ‘G_label1’, ‘G_label2’, and ‘G_label3’ in FIG. 5), each configured to generate similar data corresponding to one of a plurality of labels. Each sub-generator generates similar data belonging to the label corresponding to that sub-generator.
  • Also, dissimilar to the conventional GAN, the discriminator of the modified GAN predicts and discriminates a label corresponding to data to be determined by the discriminator instead of simply determining whether the data to be determined by the discriminator is true data or fake data.
  • For example, such labels may include ‘cardiac arrest group’, ‘sepsis group’, ‘normal’, and the like. Therefore, the modified GAN according to the example embodiment may support the data generated by the generator in becoming similar data close to true data, and may support the discriminator in verifying the label corresponding to the true data or the similar data input to the discriminator, such that the generator may generate data of a specific label through the corresponding sub-generator.
  • That is, the modified GAN according to the example embodiment may be trained based on true data and the kind of label corresponding to the true data and, accordingly, may generate various kinds of similar data, classified by label, that are close to true data. In particular, with respect to a specific label having a relatively small quantity of true data, the modified GAN may replenish the insufficient quantity by generating similar data corresponding to the specific label and thereby solve the data imbalance issue for that label.
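  • One possible reading of this modified GAN, sketched below for illustration, keeps one sub-generator per label and a discriminator that outputs label scores; the extra “fake” output class, the network sizes, and the PyTorch framing are assumptions of the sketch rather than the patented design.

```python
import torch
import torch.nn as nn

LABELS = ["cardiac_arrest", "sepsis", "normal"]    # example labels from the description
NOISE_DIM, SEQ_LEN = 16, 24                        # assumed dimensions

class SubGenerator(nn.Module):
    """One sub-generator: maps noise to a similar vital-sign sequence of its own label."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, SEQ_LEN))

    def forward(self, z):
        return self.net(z)

# The modified GAN's generator is a collection of sub-generators, one per label.
generators = nn.ModuleDict({label: SubGenerator() for label in LABELS})

class Discriminator(nn.Module):
    """Predicts which label the input belongs to; the extra 'fake' class is an assumption
    added here so the adversarial objective stays well defined."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(SEQ_LEN, 64), nn.ReLU(), nn.Linear(64, len(LABELS) + 1))

    def forward(self, x):
        return self.net(x)                          # logits over [labels..., fake]

discriminator = Discriminator()
fake_sepsis = generators["sepsis"](torch.randn(8, NOISE_DIM))   # 8 synthetic sepsis-like sequences
label_logits = discriminator(fake_sepsis)
```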
  • Hereinafter, a data classification method according to an example embodiment is described based on the aforementioned description. The data classification method includes operation S100′ of acquiring, by the computing apparatus 200 through the communicator 210, or supporting another apparatus interacting with the computing apparatus 200 to acquire true data. For example, such true data may be a time series signal; however, without being limited thereto, any data to be classified may be included in the true data.
  • The data classification method further includes operation S200′ of training, by the computing apparatus 200 through the processor 220, or allowing the other apparatus interacting through the communicator 210 to train the generator and the discriminator of the modified GAN based on label information of a label corresponding to the acquired true data and the true data. A difference between the modified GAN and the conventional GAN is described above and further description is omitted.
  • The data classification method further includes operation S300′ of training, by the computing apparatus 200 through the processor 220, or allowing the other apparatus to train the machine learning model by generating the similar data using the trained modified GAN and by using (i) the true data and the similar data or (ii) the similar data as learning data of a predetermined machine learning model for classification.
  • For example, a convolutional neural network (CNN), a recurrent neural network (RNN), and the like may be included in the machine learning model. However, it is provided as an example only, which may be understood by those skilled in the art.
  • Also, the data classification method further includes operation S400′ of, in response to acquiring data to be classified by the communicator 210 of the computing apparatus 200, generating, by the computing apparatus 200 through the processor 220, or supporting the other apparatus to generate classification information of the data to be classified by classifying the data to be classified based on the machine learning model.
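  • Operations S300′ and S400′ might be sketched together, under assumed shapes and dummy tensors, as training an ordinary classifier on the union of true data and generator-produced similar data (option (i) of operation S300′) and then applying it to newly acquired data.

```python
import torch
import torch.nn as nn

# Assumed placeholders: `true_x, true_y` stand for the acquired true sequences and labels,
# and `fake_x, fake_y` for similar data drawn from the trained sub-generators.
true_x, true_y = torch.randn(100, 24), torch.randint(0, 3, (100,))
fake_x, fake_y = torch.randn(300, 24), torch.randint(0, 3, (300,))

train_x = torch.cat([true_x, fake_x])              # option (i): true data plus similar data
train_y = torch.cat([true_y, fake_y])

classifier = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(classifier(train_x), train_y)
    loss.backward()
    optimizer.step()

# Operation S400′: classify newly acquired data with the trained machine learning model.
new_data = torch.randn(1, 24)
classification_info = classifier(new_data).argmax(dim=1)
```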
  • Further, the data classification method may further include operation S500′ of providing, by the computing apparatus 200 through the processor 220, or supporting the other apparatus to provide the classification information to an external entity. Here, the external entity may include a user of the computing apparatus, an administrator, and the like. In addition thereto, it should be understood that any entity having a right to acquire the classification information may be included.
  • Also, the data classification method may further include operation S600′ of updating, by the computing apparatus 200, or supporting the other apparatus to update the machine learning model based on evaluation information about an accuracy of the classification information.
  • The aforementioned data classification method according to the example embodiment may provide classification information about data to be classified based on a predetermined machine learning model. Therefore, if evaluation information about the accuracy of the classification information is used as retraining data, more accurate classification may be performed. Accordingly, in operation S600′, the computing apparatus 200, through the processor 220, may update or support the other apparatus to update at least one of the machine learning model and the modified GAN based on the evaluation information about the classification information.
  • For example, operation S600′ may include an operation of directly updating, by the computing apparatus 200, or supporting the other apparatus to update the machine learning model based on the evaluation information about the classification information, or an operation of indirectly updating or supporting the other apparatus to update the machine learning model by retraining the modified GAN based on the evaluation information about the classification information and using the data generated by its generator.
  • Here, learning data not used for previous learning may be further considered and an error found in the previous learning may be corrected. Therefore, the accuracy of the modified GAN may be enhanced. As data accumulates, the classification performance of the machine learning model continues to improve. Also, according to example embodiments, errors in true data being provided may be significantly reduced and a class imbalance of learning data used to train the machine learning model may be solved. Therefore, the reliability of the trained machine learning model may be improved.
  • According to the example embodiments, even when the class imbalance of the learning data is high, it is possible to enhance the accuracy of a classification model trained by existing machine learning through data augmentation that generates minority-class learning data similar to true data.
  • Based on the description of the example embodiments, those skilled in the art may clearly understand that the disclosure may be implemented using a combination of software and hardware or hardware alone. Targets of the technical solutions of the disclosure, or portions contributing to the art, may be configured in the form of program instructions performed by various computer components and stored in non-transitory computer-readable recording media. The media may include, alone or in combination with the program instructions, data files, data structures, and the like. The program instructions recorded in the media may be specially designed and configured for the example embodiments, or may be known to those skilled in the art of computer software. Examples of the media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROM discs and DVDs; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as ROM, RAM, flash memory, and the like. Examples of program instructions include machine code, such as that produced by a compiler, and higher-level language code that may be executed by a computer using an interpreter.
  • The hardware devices may be configured to operate as at least one software module to perform processing of the example embodiments, or vice versa. The hardware devices may include a processor, such as a CPU or a GPU, configured to combine with a memory, such as ROM/RAM, for storing program instructions and to execute the instructions stored in the memory, and may include a communicator configured to transmit and receive signals to and from an external apparatus. Further, the hardware devices may include a keyboard, a mouse, and other external input devices for receiving instructions produced by developers.
  • According to some example embodiments, compared to an existing method of determining the risk of a patient depending on the experience or knowledge of medical specialists, it is possible to predict fatal symptoms, such as cardiac arrest, sepsis, etc., more quickly and easily.
  • Also, according to some example embodiments, compared to an existing method of assigning a rule-based score, it is possible to further decrease false negatives and false positives in predicting fatal symptoms.
  • Also, according to some example embodiments, it is possible to innovate a workflow in a medical field by enabling a continuous prediction in time-series clinical processes.
  • Also, according to some example embodiments, a prediction performance may be continuously improved by using a method of this disclosure.
  • Also, according to some example embodiments, it is possible to overcome the class imbalance of learning data, which is an issue for a classification model built by existing machine learning, that is, in training a machine learning model for classification, through data augmentation of the learning data.
  • For example, a data classification method using data augmentation according to example embodiments may apply to many medical decision situations and thus may improve the low accuracy of a classification model caused by the scarcity of data related to cardiac arrest, sepsis, etc., given that most subjects do not show the corresponding symptoms.
  • While this disclosure is described with reference to specific matters such as components, some example embodiments, and drawings, they are merely provided to help a general understanding of the disclosure, and this disclosure is not limited to the example embodiments. It will be apparent to those skilled in the art that various alterations and modifications in form and detail may be made from the example embodiments.
  • Therefore, the scope of this disclosure is not defined by the example embodiments, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
  • Such equal or equivalent modifications may include, for example, a logically equivalent method that may achieve the same result as one acquired by implementing the method according to this disclosure.

Claims (15)

What is claimed is:
1. A method for a computing apparatus operating based on a machine learning model, the method comprising:
acquiring first vital signs of a subject during a first time duration;
predicting normal vital information related to a normal state of the subject based on the first vital signs;
converting the acquired first vital signs into second vital signs that are individualized to fit a characteristic of the subject based on the normal vital information;
generating individuation data based on the individualized second vital signs;
generating analysis information about a prediction of fatal symptoms from the individuation data based on the machine learning model; and
generating the prediction result which is determined by a result of predicting occurrence of the fatal symptoms during a predetermined duration based on the analysis information.
2. The method of claim 1, wherein the normal vital information includes a characteristic value for the normal state of the subject predicted based on the first vital signs.
3. The method of claim 2, wherein the second vital signs are converted based on a difference between the characteristic value and the first vital signs.
4. The method of claim 3, wherein the characteristic value is calculated based on an average value of at least one first vital signal corresponding to a second time duration among the first vital signs, and
wherein the second time duration is a partial time duration of the first time duration during which the subject is predicted to be in the normal state.
5. The method of claim 1, further comprising:
updating the machine learning model based on evaluation information about the prediction result.
6. The method of claim 1, wherein the first vital signs are acquired from an electronic medical record (EMR) of the subject.
7. The method of claim 1, wherein the individuation data is generated by:
generating individuation data for the subject by calculating a standard score (z-score) for the second vital signs with reference to an average and variance of vital signs of other subjects.
8. The method of claim 1, wherein the machine learning model comprises an analysis model comprising a recurrent neural network model,
the recurrent neural network model follows s_t = f(U x_t + W s_{t-1}) and y = g(V s_t),
where the x_t denotes the individuation data that is an input vector at a point in time t or a value processed from the individuation data,
the s_t denotes a hidden state corresponding to a memory of the recurrent neural network model at the point in time t,
the s_{t-1} denotes the hidden state at the point in time t−1,
the U, V, and W denote neural network parameters shared equally across all the points in time of the recurrent neural network model,
the f denotes a predetermined first activation function that is selected to calculate the hidden state,
the y denotes an output layer that is a latent feature according to the recurrent neural network model at the point in time t as the analysis information, and
the g denotes a predetermined second activation function that is selected to calculate the output layer, and
the machine learning model further comprises a prediction model comprising at least one fully connected layer for calculating an occurrence probability of the fatal symptoms from the output layer.
9. The method of claim 1, wherein the fatal symptoms comprise an occurrence of cardiac arrest or sepsis.
10. The method of claim 1, further comprising:
providing the generated prediction result to an external entity.
11. A non-transitory computer-readable storage medium storing program instructions that are executable by a computer to perform the method of claim 1.
12. A computing apparatus operating based on a machine learning model, the computing apparatus comprising:
a communicator; and
a processor configured to communicate through the communicator,
wherein the processor is configured to:
acquire first vital signs of a subject during a first time duration,
predict normal vital information related to a normal state of the subject based on the first vital signs,
convert the acquired first vital signs into second vital signs that are individualized to fit a characteristic of the subject based on the normal vital information,
generate individuation data based on the individualized second vital signs,
generate analysis information about a prediction of fatal symptoms from the individuation data based on the machine learning model, and
generate the prediction result determined by a result of predicting occurrence of the fatal symptoms during a duration based on the analysis information.
13. The apparatus of claim 12, wherein the processor is configured to update the machine learning model based on evaluation information about the prediction result.
14. The apparatus of claim 12, wherein the machine learning model comprises:
an analysis model comprising a recurrent neural network model; and
a prediction model,
wherein the recurrent neural network model follows s_t = f(U x_t + W s_{t-1}) and y = g(V s_t),
where the x_t denotes the individuation data that is an input vector at a point in time t or a value processed from the individuation data,
the s_t denotes a hidden state corresponding to a memory of the recurrent neural network model at the point in time t,
the s_{t-1} denotes the hidden state at the point in time t−1,
the U, V, and W denote neural network parameters shared equally across all the points in time of the recurrent neural network model,
the f denotes a predetermined first activation function that is selected to calculate the hidden state,
the y denotes an output layer that is a latent feature according to the recurrent neural network model at the point in time t as the analysis information, and
the g denotes a predetermined second activation function that is selected to calculate the output layer,
wherein the prediction model comprises at least one fully connected layer for calculating an occurrence probability of the fatal symptoms from the output layer.
15. The apparatus of claim 12, wherein the fatal symptoms comprise an occurrence of cardiac arrest or sepsis.
US18/218,016 2017-08-11 2023-07-04 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same Pending US20230352164A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/218,016 US20230352164A1 (en) 2017-08-11 2023-07-04 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
KR1020170102265A KR101841222B1 (en) 2017-08-11 2017-08-11 Method for generating prediction results for early prediction of fatal symptoms of a subject and apparatus using the same
KR10-2017-0102265 2017-08-11
KR10-2017-0106529 2017-08-23
KR1020170106529A KR101843066B1 (en) 2017-08-23 2017-08-23 Method for classifying data via data augmentation of the data for machine-learning and apparatus using the same
PCT/KR2018/008918 WO2019031794A1 (en) 2017-08-11 2018-08-07 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same
US202016638250A 2020-02-11 2020-02-11
US18/218,016 US20230352164A1 (en) 2017-08-11 2023-07-04 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US16/638,250 Continuation US11735317B2 (en) 2017-08-11 2018-08-07 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same
PCT/KR2018/008918 Continuation WO2019031794A1 (en) 2017-08-11 2018-08-07 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same

Publications (1)

Publication Number Publication Date
US20230352164A1 true US20230352164A1 (en) 2023-11-02

Family

ID=65271422

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/638,250 Active 2039-12-12 US11735317B2 (en) 2017-08-11 2018-08-07 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same
US18/218,016 Pending US20230352164A1 (en) 2017-08-11 2023-07-04 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/638,250 Active 2039-12-12 US11735317B2 (en) 2017-08-11 2018-08-07 Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same

Country Status (3)

Country Link
US (2) US11735317B2 (en)
JP (2) JP6931897B2 (en)
WO (1) WO2019031794A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11593716B2 (en) 2019-04-11 2023-02-28 International Business Machines Corporation Enhanced ensemble model diversity and learning
JP7239109B2 (en) * 2019-06-12 2023-03-14 株式会社モリタ製作所 Estimation Device, Estimation System, Method of Operating Estimation Device, and Estimation Program
US11507803B2 (en) 2020-01-31 2022-11-22 Rohde & Schwarz Gmbh & Co. Kg System for generating synthetic digital data for data multiplication
US11551061B2 (en) 2020-01-31 2023-01-10 Rohde & Schwarz Gmbh & Co. Kg System for generating synthetic digital data of multiple sources
JP7372614B2 (en) 2020-05-14 2023-11-01 学校法人早稲田大学 Information processing systems and programs
CN112183376A (en) * 2020-09-29 2021-01-05 中国人民解放军军事科学院国防科技创新研究院 Deep learning network architecture searching method for EEG signal classification task
KR102559047B1 (en) * 2021-01-28 2023-07-21 사회복지법인 삼성생명공익재단 The system and method of symptom worsening prediction
KR102623020B1 (en) * 2023-09-11 2024-01-10 주식회사 슈파스 Method, computing device and computer program for early predicting septic shock through bio-data analysis based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130185097A1 (en) * 2010-09-07 2013-07-18 The Board Of Trustees Of The Leland Stanford Junior University Medical scoring systems and methods
US20130237776A1 (en) * 2010-03-15 2013-09-12 Nanyang Technological University Method of predicting acute cardiopulmonary events and survivability of a patient
US20180004901A1 (en) * 2016-06-30 2018-01-04 General Electric Company Systems and methods for holistic analysis of medical conditions

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3050624B2 (en) 1991-04-12 2000-06-12 三菱電機株式会社 Correlation data collection system
US7761309B2 (en) * 2002-03-22 2010-07-20 Thinksharp, Inc. Method and system of mass and multiple casualty triage
US20050182655A1 (en) * 2003-09-02 2005-08-18 Qcmetrix, Inc. System and methods to collect, store, analyze, report, and present data
US20110202486A1 (en) * 2009-07-21 2011-08-18 Glenn Fung Healthcare Information Technology System for Predicting Development of Cardiovascular Conditions
KR20130082551A (en) * 2011-12-08 2013-07-22 한국전자통신연구원 Clinical data analysis apparatus and clinical data analysis method thereof
KR20140108417A (en) 2013-02-27 2014-09-11 김민준 Health diagnosis system using image information
US10262108B2 (en) * 2013-03-04 2019-04-16 Board Of Regents Of The University Of Texas System System and method for determining triage categories
EP3010401A4 (en) * 2013-06-20 2017-03-15 University Of Virginia Patent Foundation Multidimensional time series entrainment system, method and computer readable medium
US20160364545A1 (en) * 2015-06-15 2016-12-15 Dascena Expansion And Contraction Around Physiological Time-Series Trajectory For Current And Future Patient Condition Determination
US20160364536A1 (en) * 2015-06-15 2016-12-15 Dascena Diagnostic support systems using machine learning techniques
KR20170061222A (en) * 2015-11-25 2017-06-05 한국전자통신연구원 The method for prediction health data value through generation of health data pattern and the apparatus thereof
JP6828055B2 (en) * 2016-05-04 2021-02-10 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Estimating and using clinician assessment of patient urgency
WO2018219809A1 (en) * 2017-05-30 2018-12-06 Koninklijke Philips N.V. System and method for providing a layer-based presentation of a model-generated patient-related prediction

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130237776A1 (en) * 2010-03-15 2013-09-12 Nanyang Technological University Method of predicting acute cardiopulmonary events and survivability of a patient
US20130185097A1 (en) * 2010-09-07 2013-07-18 The Board Of Trustees Of The Leland Stanford Junior University Medical scoring systems and methods
US20180004901A1 (en) * 2016-06-30 2018-01-04 General Electric Company Systems and methods for holistic analysis of medical conditions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gopalswamy et al. "Deep recurrent neural networks for predicting intraoperative and postoperative outcomes and trends." 2017 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI). IEEE, Feb 2017. (Year: 2017) *

Also Published As

Publication number Publication date
JP6931897B2 (en) 2021-09-08
US20200176117A1 (en) 2020-06-04
JP2020532025A (en) 2020-11-05
US11735317B2 (en) 2023-08-22
JP7307926B2 (en) 2023-07-13
WO2019031794A1 (en) 2019-02-14
JP2021177429A (en) 2021-11-11

Similar Documents

Publication Publication Date Title
US20230352164A1 (en) Method for generating prediction result for predicting occurrence of fatal symptoms of subject in advance and device using same
Sivapalan et al. ANNet: A lightweight neural network for ECG anomaly detection in IoT edge sensors
Gjoreski et al. Machine learning and end-to-end deep learning for the detection of chronic heart failure from heart sounds
Raju et al. Smart heart disease prediction system with IoT and fog computing sectors enabled by cascaded deep learning model
US20200337580A1 (en) Time series data learning and analysis method using artificial intelligence
KR101841222B1 (en) Method for generating prediction results for early prediction of fatal symptoms of a subject and apparatus using the same
Dbritto et al. Comparative analysis of accuracy on heart disease prediction using classification methods
Rongjun et al. Collaborative extreme learning machine with a confidence interval for P2P learning in healthcare
Malibari An efficient IoT-Artificial intelligence-based disease prediction using lightweight CNN in healthcare system
Khalsa et al. Artificial intelligence and cardiac surgery during COVID‐19 era
KR102421172B1 (en) Smart Healthcare Monitoring System and Method for Heart Disease Prediction Based On Ensemble Deep Learning and Feature Fusion
Mijwil et al. A scoping review of machine learning techniques and their utilisation in predicting heart diseases
Rout et al. Early detection of sepsis using LSTM neural network with electronic health record
Rahman et al. Automated detection of cardiac arrhythmia based on a hybrid CNN-LSTM network
Nam et al. Selective prediction with long short-term memory using unit-wise batch standardization for time series health data sets: algorithm development and validation
Yu et al. Work-in-progress: On the feasibility of lightweight scheme of real-time atrial fibrillation detection using deep learning
KR102049829B1 (en) Method for classifying subject according to criticality thereof by assessing the criticality and apparatus using the same
Bhukya et al. Detection and classification of cardiac arrhythmia using artificial intelligence
Kumar Sensitivity, specificity, generalizability, and reusability aspirations for machine learning (ML) models in MHealth
Jha Artificial intelligence and Internet of Things for sustainable healthcare system
El Mir et al. The state of the art of using artificial intelligence for disease identification and diagnosis in healthcare
WO2024039218A1 (en) Method, computer program, and apparatus for augmenting bio-signal data
Sathiyaraj et al. Convergence of Big Data and Cognitive Computing in Healthcare
Venkatesh et al. An automatic diagnostic model for the detection and classification of cardiovascular diseases based on swarm intelligence technique
Nandi et al. Implementation of Ensemble Algorithm with Data Pruning on Qualcomm Snapdragon 820c

Legal Events

Date Code Title Description
AS Assignment

Owner name: HYEWON MEDICAL FOUNDATION, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, YEONGNAM;LEE, YEHA;KWON, JOONMYOUNG;SIGNING DATES FROM 20200130 TO 20200131;REEL/FRAME:064142/0604

Owner name: VUNO, INC., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, YEONGNAM;LEE, YEHA;KWON, JOONMYOUNG;SIGNING DATES FROM 20200130 TO 20200131;REEL/FRAME:064142/0604

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED