WO2020242239A1 - Artificial intelligence-based diagnosis support system using an ensemble learning algorithm - Google Patents

Artificial intelligence-based diagnosis support system using an ensemble learning algorithm

Info

Publication number
WO2020242239A1
Authority
WO
WIPO (PCT)
Prior art keywords
lesion
image
learning
learning model
clinical data
Prior art date
Application number
PCT/KR2020/006971
Other languages
English (en)
Korean (ko)
Inventor
김원태
강신욱
이명재
김동민
남동연
Original Assignee
(주)제이엘케이
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)제이엘케이 filed Critical (주)제이엘케이
Publication of WO2020242239A1

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • The present disclosure relates to deep learning model training technology and, more specifically, to a method and apparatus for learning about lesions based on medical images and clinical data, and to a method and apparatus for diagnosing lesions using a learning model built on the basis of medical images and clinical data.
  • Deep learning is a technique that learns from a very large amount of data and, when new data is input, selects the answer with the highest probability based on the learning results.
  • Such deep learning can operate adaptively according to an image, and since feature factors are found automatically in the process of training a model based on data, attempts to utilize it in the field of artificial intelligence are increasing.
  • Certain lesions may show signs in a specific area of the body, but some lesions may appear in a complex manner across various areas of the body, and the accompanying changes in the body may also be complex. Therefore, it is difficult to detect a disease or lesion by considering only the symptoms or signs appearing in a specific area of the patient.
  • For example, a disease such as systemic lupus erythematosus, which is a kind of rheumatic disease, may exhibit simultaneous and multiple symptoms throughout the body.
  • An object of the present disclosure is to provide a method and apparatus for learning lesion severity by comprehensively considering an image of a body and a biological change occurring in the body.
  • Another technical problem of the present disclosure is to provide a method and apparatus for diagnosing a lesion that predicts the severity of a lesion by comprehensively reflecting an image of a body and a biological change occurring in the body.
  • Another technical object of the present disclosure is to provide a method and apparatus for learning, in a complex manner, the progress of a disease, the relationship between diseases, the state of metastasis, and the like by comprehensively reflecting and learning the various symptoms or signs expressed in the body.
  • Another technical object of the present disclosure is to provide a diagnostic method and apparatus capable of comprehensively predicting the progress of a disease, the relationship between diseases, the state of metastasis, and the like by comprehensively reflecting the symptoms or signs variously expressed in the body.
  • According to an aspect of the present disclosure, an apparatus for integrated lesion learning may include: an image-based learning unit that receives a medical image and trains at least one image-based learning model that outputs an image-based lesion prediction result; at least one clinical data-based learning unit that receives clinical data and trains a learning model that outputs a clinical data-based lesion prediction result; and an integrated learning unit that performs ensemble learning of an integrated learning model that receives the image-based lesion prediction result and the clinical data-based lesion prediction result and outputs a final lesion prediction result.
  • According to another aspect of the present disclosure, a method for integrated lesion learning may include: training at least one image-based learning model that receives a medical image and outputs an image-based lesion prediction result; training at least one biosignal-based learning model that receives a biosignal and outputs a biosignal-based lesion prediction result corresponding to the biosignal; training at least one clinical information-based learning model that receives at least one piece of clinical information obtained through a clinical test and outputs a clinical data-based lesion prediction result corresponding to the at least one piece of clinical information; and performing ensemble learning of an integrated learning model that receives the image-based lesion prediction result, the biosignal-based lesion prediction result, and the clinical data-based lesion prediction result and outputs a final lesion prediction result.
  • an apparatus for diagnosing a lesion may be provided.
  • The apparatus is an apparatus for diagnosing a lesion using a learning model obtained by learning the lesion, and may include: an image-based prediction unit that outputs an image-based lesion prediction result corresponding to an input medical image using at least one image-based learning model; at least one clinical data-based prediction unit that outputs a clinical data-based lesion prediction result corresponding to input clinical data using a clinical data-based learning model; and an integrated diagnosis unit that inputs the image-based lesion prediction result and the clinical data-based lesion prediction result into an integrated learning model and checks the final lesion prediction result output through the integrated learning model.
  • a method for diagnosing a lesion may be provided.
  • The method is a method for diagnosing a lesion using a learning model that has learned the lesion, and may include: outputting an image-based lesion prediction result corresponding to an input medical image using an image-based learning model; confirming a biosignal-based lesion prediction result corresponding to an input biosignal using a biosignal-based learning model; confirming a clinical data-based lesion prediction result corresponding to at least one piece of clinical information acquired through a clinical test using a clinical information-based learning model; and inputting the image-based lesion prediction result, the biosignal-based lesion prediction result, and the clinical data-based lesion prediction result into an integrated learning model and verifying the final lesion prediction result output through the integrated learning model.
  • a method and an apparatus for learning a lesion severity by comprehensively considering an image of a body and a biological change occurring in the body may be provided.
  • a method and apparatus for predicting a lesion severity by comprehensively considering an image of a body and a biological change occurring in the body may be provided.
  • Also, according to the present disclosure, a method and an apparatus for learning, in a complex manner, the progress of a disease, the relationship between diseases, the state of metastasis, and the like can be provided by comprehensively reflecting and learning the various symptoms or signs expressed in the body.
  • Also, according to the present disclosure, a method and apparatus capable of comprehensively predicting the progress of a disease, the relationship between diseases, the state of metastasis, and the like may be provided by comprehensively reflecting the various symptoms or signs expressed in the body.
  • Also, according to the present disclosure, the diagnosis result is predicted through a learning model that has learned, in a complex manner, disease progression, the relationship between diseases, and the metastasis state based on a large amount of data on the symptoms or signs variously expressed in the body, so that a diagnosis result with relatively high reliability can be derived compared to a diagnosis or determination based on empirical values.
  • FIG. 1 is a block diagram showing the configuration of a lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating a learning operation of a learning model provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 3 is a block diagram showing a detailed configuration of an image-based learning unit included in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 4 is a block diagram illustrating a detailed configuration of a first clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 5 is a block diagram illustrating a detailed configuration of a second clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 6 is a block diagram showing the configuration of a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 7 is a block diagram illustrating a detailed configuration of an image-based detection unit included in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a block diagram illustrating a detailed configuration of a first clinical data-based detection unit provided in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 9 is a block diagram illustrating a detailed configuration of a second clinical data-based detection unit provided in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is a flowchart illustrating a procedure of a method for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 11 is a flowchart illustrating a procedure of a method for diagnosing a lesion according to an embodiment of the present disclosure.
  • FIG. 12 is a block diagram illustrating a computing system for executing the method and apparatus for integrated lesion learning and the method and apparatus for diagnosing a lesion according to an embodiment of the present disclosure.
  • The terms first and second are used only for the purpose of distinguishing one component from another and do not limit the order or importance of the components unless otherwise stated. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • components that are distinguished from each other are intended to clearly describe each characteristic, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed to form a plurality of hardware or software units. Therefore, even if not stated otherwise, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • components described in various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment consisting of a subset of components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to components described in the various embodiments are included in the scope of the present disclosure.
  • FIG. 1 is a block diagram showing the configuration of a lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • the integrated lesion learning apparatus 10 may include an image-based learning unit 11, at least one clinical data-based learning unit 13, and an integrated learning unit 15.
  • the image-based learning unit 11 may learn an image-based learning model for receiving a medical image and outputting a lesion prediction result.
  • the image-based learning unit 11 may include a plurality of image-based learning models, and a probability of developing a specific disease in a specific region included in the medical image may be set as a target variable of the image-based learning model.
  • the detailed configuration and operation of the image-based learning unit 11 will be described in detail with reference to FIG. 3 attached below.
  • Medical images are images of the entire body or of a specific diagnosis region captured through various imaging techniques, and may include T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, STIR images, T1 images, T1 with agents images, and the like.
  • the clinical data-based learning unit 13 may learn a clinical data-based learning model that receives clinical data and outputs lesion prediction results.
  • The clinical data may include data detected from a biosignal (e.g., ECG, PPG, EMG, etc.) that measures a user's biological changes, or from body fluids, urine, biopsies, and the like generated by the user's body.
  • The clinical data-based learning model receives data detected from biosignals (e.g., ECG, PPG, EMG, etc.) or from body fluids, urine, biopsies, and the like generated by the user's body, and the probability of developing a specific disease corresponding to the aforementioned clinical data may be set as an objective variable of the clinical data-based learning model. Accordingly, the clinical data-based learning model may be trained to output a lesion prediction result corresponding to the above-described clinical data.
  • At least one of the aforementioned clinical data-based learning units 13 may be provided in the integrated lesion learning apparatus 10, and each clinical data-based learning unit 13 may include a clinical data-based learning model.
  • The clinical data-based learning model may be trained based on at least one of the following methods: logistic regression, multinomial logistic regression, multi-layer perceptron, stochastic gradient descent, bagging, random forest, decision tree, support vector machine, k-nearest neighbor, linear regression, Bayesian regression, and kernel ridge regression.
  • Although machine learning algorithms usable for the clinical data-based learning model are illustrated above, the present disclosure is not limited thereto, and various other types of machine learning algorithms may be used in addition to the illustrated ones.
  • the clinical data-based learning unit 13 may be configured to be classified according to the type of input data.
  • For example, the clinical data-based learning unit 13 may include a first clinical data-based learning unit 13-1 that learns biosignals (e.g., ECG, PPG, EMG, etc.) and a second clinical data-based learning unit 13-2 that learns data detected from body fluids, urine, biopsies, and the like.
  • The clinical data-based learning unit 13 may check the type of the input data, select the learning unit 13-1 or 13-2 corresponding to the identified type, and provide the clinical data to it.
  • As another example, a user interface may be provided so that biosignals (e.g., ECG, PPG, EMG, etc.) are input to the first clinical data-based learning unit 13-1 and data detected from body fluids, urine, biopsies, and the like are input to the second clinical data-based learning unit 13-2.
  • As another example, the clinical data-based learning unit 13 may set, at the design stage, the clinical data suitable for the first clinical data-based learning unit 13-1 and the second clinical data-based learning unit 13-2, and may provide, through the user interface, an environment in which the corresponding clinical data are input to the first and second clinical data-based learning units 13-1 and 13-2.
  • The integrated learning unit 15 can receive the lesion prediction results output from the image-based learning unit 11 and the clinical data-based learning unit 13, and can contain a learning model that learns the final lesion prediction result as the corresponding output. In particular, the integrated learning unit 15 may be configured to obtain the final lesion prediction result by performing ensemble learning on the plurality of lesion prediction results provided from the image-based learning unit 11 and the clinical data-based learning unit 13.
  • The lesion prediction results output from the image-based learning unit 11 and the clinical data-based learning unit 13 are set as the input data of the integrated learning model, and the same objective variable as the one set when training the image-based learning model and the clinical data-based learning model can be set as the output data of the integrated learning model.
  • the integrated learning model can build an ensemble model by learning weights between the outputs of the image-based learning model and the clinical data-based learning model.
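  • As an illustrative, non-limiting sketch of this weight-learning step, the following Python code stacks hypothetical base-model lesion probabilities into feature vectors and fits a logistic-regression meta-model whose coefficients act as the ensemble weights between models; the array names, values, and the choice of logistic regression as the meta-model are assumptions for illustration only, not an implementation prescribed by the present disclosure.

```python
# A minimal stacking sketch, assuming each base model already outputs a
# lesion probability. Names (p_image, p_clinical_1, ...) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical base-model outputs: each column is one learning model's
# predicted lesion probability for the same cases.
p_image = np.array([0.91, 0.12, 0.78, 0.33])       # image-based model
p_clinical_1 = np.array([0.85, 0.20, 0.60, 0.40])  # biosignal-based model
p_clinical_2 = np.array([0.88, 0.05, 0.70, 0.25])  # fluid/biopsy-based model
y_final = np.array([1, 0, 1, 0])                   # final reading result (target)

# Stack the base predictions into one feature vector per case and learn
# weights between the base-model outputs (the integrated learning model).
X_stack = np.column_stack([p_image, p_clinical_1, p_clinical_2])
meta_model = LogisticRegression().fit(X_stack, y_final)

# The learned coefficients act as the ensemble weights between models.
print(meta_model.coef_)
print(meta_model.predict_proba(X_stack)[:, 1])     # final lesion probabilities
```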
  • FIG. 2 is a diagram illustrating a learning operation of a learning model provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • the integrated learning data set 200 may be configured, and data obtained by combining a plurality of integrated learning data sets 200 may be configured as integrated learning data.
  • the integrated learning data set 200 may be configured to include medical image data 210, first clinical data 220, and second clinical data 230.
  • Further, the integrated learning data set 200 may be configured to include an image reading result 215 including the lesion and the severity of the lesion for a specific region in the medical image, a first clinical reading result 225 including the lesion and the severity of the lesion for the first clinical data, and a second clinical reading result 235 including the lesion and the severity of the lesion for the second clinical data, and the image reading result 215, the first clinical reading result 225, and the second clinical reading result 235 may be configured to correspond to the medical image data 210, the first clinical data 220, and the second clinical data 230, respectively.
  • In addition, the integrated learning data set 200 may be configured to include final reading result data 250 including the lesion and the severity of the lesion determined based on the medical image data 210, the first clinical data 220, and the second clinical data 230.
  • the image-based learning model 21 may perform learning by receiving medical image data 210 and receiving an image reading result 215 as a target variable.
  • the first clinical data-based learning model 22 may receive the first clinical data 220 and receive the first clinical reading result 225 as a target variable to perform learning.
  • the second clinical data-based learning model 23 may perform learning by receiving the second clinical data 230 and receiving the second clinical reading result 235 as a target variable.
  • The integrated learning model 25 receives the image reading result 215, the first clinical reading result 225, the second clinical reading result 235, and the like, and takes the final reading result data 250 as a target variable to perform learning. For example, the integrated learning model 25 may represent each of the image reading result 215, the first clinical reading result 225, and the second clinical reading result 235 as one feature vector, and build an ensemble model that learns a class for the target variable from each of the feature vectors.
  • The image-based learning model 21, the first clinical data-based learning model 22, the second clinical data-based learning model 23, and the integrated learning model 25 can be built by performing learning on the integrated learning data obtained by combining a plurality of integrated learning data sets 200.
  • Meanwhile, a first data set 260 and a second data set 270 may be configured.
  • The second data set 270 may be a validation set used for verification, or a new data set having the same characteristics and target-variable information as the first data set 260.
  • the present disclosure is not limited thereto.
  • For example, learning of the image-based learning model 21, the first clinical data-based learning model 22, and the second clinical data-based learning model 23 may be performed using the first data set 260, while learning of the integrated learning model 25 is performed using the second data set 270, so that the learning of the integrated learning model 25 is carried out separately.
  • FIG. 3 is a block diagram showing a detailed configuration of an image-based learning unit included in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • Referring to FIG. 3, the image-based learning unit 30 may include a lesion region detection unit 31, image-based severity learning units 32a, 32b, and 32c, and an image-based integrated learning unit 35 that performs ensemble learning on the image-based lesion prediction results respectively output by the plurality of image-based severity learning units 32a, 32b, and 32c.
  • the plurality of image-based severity learning units 32a, 32b, and 32c each have a different learning structure.
  • The lesion region detection unit 31 can receive a medical image photographing the user's body (hereinafter referred to as an 'original medical image') and detect a medical image in which the lesion region is extracted from the original medical image (hereinafter referred to as a 'lesion region image'). The lesion region detection unit 31 may detect the lesion region image and provide the detected lesion region image as an input to the image-based severity learning units 32a, 32b, and 32c.
  • the operation of extracting the lesion area image from the original medical image by the lesion area detection unit 31 may be performed based on a convolutional neural network (CNN) technique or a pooling technique.
  • In particular, the lesion region detection unit 31 may include a lesion region detection learning model 310 built through learning that takes an original medical image as an input and outputs the lesion region image.
  • The lesion region detection unit 31 may input the original medical image into the lesion region detection learning model 310 and detect the corresponding lesion region image through the lesion region detection learning model 310.
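  • The sketch below illustrates, under stated assumptions, one way such a CNN-based lesion region detection model could be structured: a small fully convolutional network (the layer sizes, channel counts, and input resolution are hypothetical choices, not values from the present disclosure) that maps an original medical image to a per-pixel lesion-region probability map.

```python
# A minimal sketch of a CNN that takes an original medical image and
# outputs a per-pixel lesion-region map; the layer sizes are assumptions.
import torch
import torch.nn as nn

class LesionRegionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling halves H and W
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # 1x1 convolution to a single-channel lesion-probability map,
        # upsampled back to the input resolution.
        self.head = nn.Conv2d(32, 1, kernel_size=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, x):
        return torch.sigmoid(self.up(self.head(self.features(x))))

model = LesionRegionNet()
mri = torch.randn(1, 1, 128, 128)   # stand-in for a T2/ADC slice
mask = model(mri)                   # lesion-region probability map
print(mask.shape)                   # torch.Size([1, 1, 128, 128])
```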
  • medical images may be selectively used based on the characteristics of each body organ or diagnosis region, or lesions present in the body organ or diagnosis region.
  • the image-based learning unit 30 may use a T2-weighted (T2) image, an apparent diffusion coefficients (ADC) image, and the like as an input of the lesion region detection unit 31.
  • the image-based learning unit 30 may use a STIR image, a T1 image, a T1 with Agents image, and a T2 image as an input of the lesion region detection unit 31.
  • the image-based learning unit 30 may use a T1 image, a T2 image, or a FLAIR as an input to the lesion region detection unit 31.
  • The image-based severity learning units 32a, 32b, and 32c can train the image-based severity learning models 320a, 320b, and 320c, and can receive a lesion region image as the analysis target image for performing the learning.
  • Learning of the image-based severity learning models 320a, 320b, and 320c may be performed by labeling the severity for a specific object or a specific region included in the lesion region image.
  • The image-based severity learning units 32a, 32b, and 32c may perform learning of the image-based severity learning models 320a, 320b, and 320c based on a convolutional neural network (CNN) technique or a pooling technique.
  • the image-based severity learning models 320a, 320b, and 320c may analyze an input image to extract features of an image.
  • the feature may be a local feature for each area of the image.
  • the image-based severity learning models 320a, 320b, and 320c may extract features of an input image using a general convolutional neural network (CNN) technique or a pooling technique.
  • the pooling technique may include at least one of a max pooling technique and an average pooling technique.
  • the pooling technique referred to in the present disclosure is not limited to the max pooling technique or the average pooling technique, and includes any technique for obtaining a representative value of an image region of a predetermined size.
  • the representative value used in the pooling technique may be at least one of a variance value, a standard deviation value, a mean value, a most frequent value, a minimum value, and a weighted average value, in addition to the maximum value and the average value.
  • the convolutional neural network of the present disclosure may be used to extract "features" such as a border, a line color, and the like from input data (images), and may include a plurality of layers. Each layer may receive input data and process the input data of the layer to generate output data.
  • the convolutional neural network may output an input image or a feature map generated by convolving an input feature map with filter kernels as output data.
  • the initial layers of the convolutional neural network can be operated to extract low-level features such as edges or gradients from the input.
  • the next layers of the neural network can gradually extract more complex features such as eyes, nose, etc.
  • the convolutional neural network may include a pooling layer in which a pooling operation is performed in addition to a convolutional layer in which a convolution operation is performed.
  • the pooling technique is a technique used to reduce the spatial size of data in the pooling layer.
  • the pooling technique includes a max pooling technique that selects the maximum value in a corresponding area and an average pooling technique that selects an average value of the area.
  • In the field of image recognition, the max pooling technique is generally used.
  • the pooling window size and interval (stride) are set to the same value.
  • The stride refers to the interval by which the filter is moved when it is applied to the input data, and the stride may also be used to adjust the size of the output data.
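  • As a small worked illustration of the pooling described above (a window size and stride both equal to 2, with values chosen only for this example), the following numpy sketch computes the max-pooling and average-pooling representative values over each 2x2 region.

```python
# Pooling with window size == stride == 2: max pooling keeps the maximum of
# each 2x2 region, average pooling keeps its mean. Values are illustrative.
import numpy as np

x = np.array([[1, 3, 2, 0],
              [4, 6, 1, 1],
              [5, 2, 9, 7],
              [0, 8, 3, 4]], dtype=float)

k, s = 2, 2  # pooling window size and stride set to the same value
out_h, out_w = x.shape[0] // s, x.shape[1] // s
max_pool = np.zeros((out_h, out_w))
avg_pool = np.zeros((out_h, out_w))
for i in range(out_h):
    for j in range(out_w):
        region = x[i*s:i*s+k, j*s:j*s+k]
        max_pool[i, j] = region.max()    # representative value: maximum
        avg_pool[i, j] = region.mean()   # representative value: average

print(max_pool)  # [[6. 2.] [8. 9.]]
print(avg_pool)  # [[3.5  1.  ] [3.75 5.75]]
```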
  • The image-based severity learning models 320a, 320b, and 320c may be built as learning models of different structures, and may be configured to output image-based lesion prediction results 321a, 321b, and 321c based on the learning models 320a, 320b, and 320c, respectively. Further, the image-based lesion prediction results 321a, 321b, and 321c respectively provided by the learning models 320a, 320b, and 320c may be configured to be provided as inputs to the image-based integrated learning unit 35.
  • The image-based integrated learning unit 35 may include an image-based integrated learning model 350 that receives the image-based lesion prediction results 321a, 321b, and 321c and learns the image-based integrated lesion prediction result as the corresponding output.
  • In particular, the image-based integrated learning unit 35 may perform ensemble learning on the plurality of image-based lesion prediction results 321a, 321b, and 321c provided from the image-based severity learning models 320a, 320b, and 320c to construct the image-based integrated lesion prediction result.
  • Meanwhile, clinical data may be data detected from a biosignal (e.g., ECG, PPG, EMG, etc.) that measures a user's biological changes, or from body fluids, urine, biopsies, and the like generated by the user's body; however, the present disclosure is not limited thereto, and the types of clinical data may be variously changed.
  • the configuration of the clinical data-based learning unit is exemplified based on the type of clinical data described above, but the configuration of the clinical data-based learning unit may be variously changed according to the type of clinical data.
  • In the following description, the first clinical data-based learning unit exemplifies a component that receives a biosignal (e.g., ECG, PPG, EMG, etc.) measuring a user's biological changes and performs learning of a learning model, and the second clinical data-based learning unit exemplifies a component that performs learning of a learning model by receiving data detected from body fluids, urine, biopsies, and the like generated by the user's body.
  • FIG. 4 is a block diagram illustrating a detailed configuration of a first clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • Referring to FIG. 4, the first clinical data-based learning unit 40 may include a noise filter unit 41 that removes noise from an input biosignal (e.g., ECG, PPG, EMG, etc.) and a diagnostic section extraction unit 42 that extracts, from the noise-removed biosignal, a diagnostic section to be used for learning or detection.
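  • A hedged sketch of these two units is shown below, assuming an ECG-like signal; the sampling rate, the 0.5-40 Hz pass band, and the 2-second section length are illustrative assumptions, not values specified by the present disclosure.

```python
# Noise filtering and diagnostic-section extraction for a biosignal.
# The filter band and section length are assumed values for illustration.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # noisy signal

# Noise filter unit: zero-phase Butterworth band-pass filter.
b, a = butter(4, [0.5 / (fs / 2), 40.0 / (fs / 2)], btype="band")
denoised = filtfilt(b, a, raw)

# Diagnostic section extraction unit: cut the denoised signal into
# fixed-length sections to be used for learning or detection.
win = int(2 * fs)                         # 2-second diagnostic sections
sections = [denoised[i:i + win] for i in range(0, denoised.size - win + 1, win)]
print(len(sections), sections[0].shape)   # 5 sections of 500 samples each
```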
  • the first clinical data-based learning unit 40 may include a lesion signal learning unit 43.
  • The lesion signal learning unit 43 may include first clinical data-based learning models 430-1, 430-2, ..., 430-n that learn by using the biosignal of the diagnostic section as input data and the lesion severity as a target variable.
  • Since the first clinical data may be data configured in sequential form, the first clinical data-based learning models 430-1, 430-2, ..., 430-n can perform learning on the first clinical data based on a recurrent neural network (RNN) method.
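  • A minimal sketch of such an RNN-based model is given below, assuming each diagnostic section is a univariate sequence and lesion severity takes one of three levels; the LSTM variant, hidden size, and class count are assumptions for illustration, not choices fixed by the present disclosure.

```python
# An RNN (here an LSTM) that maps a diagnostic-section sequence to lesion
# severity logits; all sizes are illustrative only.
import torch
import torch.nn as nn

class BiosignalSeverityRNN(nn.Module):
    def __init__(self, hidden=64, num_severity_classes=3):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_severity_classes)

    def forward(self, x):                 # x: (batch, time, 1)
        _, (h_n, _) = self.rnn(x)         # last hidden state summarizes the section
        return self.classifier(h_n[-1])   # severity logits

model = BiosignalSeverityRNN()
section = torch.randn(8, 500, 1)          # batch of 2-second diagnostic sections
logits = model(section)
print(logits.shape)                       # torch.Size([8, 3])
```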
  • FIG. 5 is a block diagram illustrating a detailed configuration of a second clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • the second clinical data-based learning unit 50 may include a data normalization unit 51 and a lesion data learning unit 52.
  • the second clinical data detected from bodily fluids, urine, biopsy, etc. generated by the user's body may be configured in various forms, and each may represent different values. Accordingly, the data normalization unit 51 may normalize the second clinical data configured in various forms.
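  • For illustration, the normalization step could look like the following sketch, assuming the second clinical data arrive as numeric columns on very different scales; the column meanings and the z-score choice are assumptions, not requirements of the present disclosure.

```python
# A small sketch of the data normalization unit: z-score scaling of
# heterogeneous clinical columns (column meanings are hypothetical).
import numpy as np

clinical = np.array([[140.0, 5.4, 0.9],   # e.g. glucose, WBC, creatinine
                     [ 98.0, 7.1, 1.2],
                     [110.0, 4.8, 0.7]])

mean = clinical.mean(axis=0)
std = clinical.std(axis=0)
normalized = (clinical - mean) / std      # each column now has mean 0, std 1
print(normalized.round(2))
```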
  • the lesion data learning unit 52 may include a second clinical data-based learning model 520 for learning by using the normalized second clinical data as input data and using the lesion severity as a target variable.
  • The second clinical data-based learning model 520 may perform learning on the second clinical data based on a feed-forward neural network (FFNN) method.
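  • A minimal sketch of such a feed-forward model is shown below; the input feature count, layer widths, and number of severity levels are assumptions for illustration.

```python
# A feed-forward (FFNN) model mapping normalized second clinical data to a
# lesion-severity output; sizes are illustrative assumptions.
import torch
import torch.nn as nn

ffnn = nn.Sequential(
    nn.Linear(3, 32),   # 3 normalized clinical features (assumed)
    nn.ReLU(),
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 3),   # logits over 3 assumed severity levels
)

x = torch.randn(4, 3)   # a batch of normalized second clinical data
print(ffnn(x).shape)    # torch.Size([4, 3])
```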
  • FIG. 6 is a block diagram showing the configuration of a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • the lesion diagnosis apparatus 60 may include an image-based detection unit 61, at least one clinical data-based detection unit 63, and an integrated diagnosis unit 65.
  • The image-based detection unit 61 may include at least one image-based learning model 610 that receives a medical image and outputs a corresponding image-based lesion prediction result, and, through the at least one image-based learning model 610, can output the probability of developing a specific disease in a specific region included in the medical image as the lesion prediction result.
  • Medical images are images of the entire body or of a specific diagnosis region captured through various imaging techniques, and may include T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, STIR images, T1 images, T1 with agents images, and the like.
  • the clinical data-based detection unit 63 may include clinical data-based learning models 630-1 and 630-2 for receiving clinical data and outputting lesion prediction results.
  • The clinical data may include data detected from a biosignal (e.g., ECG, PPG, EMG, etc.) that measures a user's biological changes, or from body fluids, urine, biopsies, and the like generated by the user's body.
  • The clinical data-based learning models 630-1 and 630-2 may be models trained to receive data detected from biosignals (e.g., ECG, PPG, EMG, etc.) or from body fluids, urine, biopsies, and the like generated by the user's body, and to output the probability of developing a specific disease corresponding to the aforementioned clinical data as the lesion prediction result.
  • At least one of the above-described clinical data-based detection units 63 may be provided in the lesion diagnosis apparatus 60, and each clinical data-based detection unit 63 may include a clinical data-based learning model 630-1 or 630-2.
  • the clinical data-based detection unit 63 may be configured to be classified according to the type of input data.
  • For example, the clinical data-based detection unit 63 may include a first clinical data-based detection unit 63-1 that takes a biosignal (e.g., ECG, PPG, EMG, etc.) as an input and a second clinical data-based detection unit 63-2 that takes data detected from body fluids, urine, biopsies, and the like as an input.
  • the clinical data-based detection unit 63 may be configured to check the type of input data, select the detection units 63-1 and 63-2 corresponding to the identified type, and provide the corresponding clinical data.
  • As another example, a user interface may be provided so that biosignals (e.g., ECG, PPG, EMG, etc.) are input to the first clinical data-based detection unit 63-1 and data detected from body fluids, urine, biopsies, and the like are input to the second clinical data-based detection unit 63-2.
  • the integrated diagnosis unit 65 may receive a lesion prediction result output from the image-based detection unit 61 and the clinical data-based detection unit 63, and confirm the final lesion prediction result as an output corresponding thereto.
  • In particular, the integrated diagnosis unit 65 may receive the plurality of lesion prediction results provided from the image-based detection unit 61 and the clinical data-based detection unit 63, and may provide the corresponding final lesion prediction result through an integrated learning model 650 built by ensemble learning.
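  • For illustration, the final combination step could look like the following sketch, in which the per-model lesion probabilities of one case are combined using weights learned in advance by ensemble training; the weight and bias values here are hypothetical stand-ins, not coefficients from the present disclosure.

```python
# The integrated diagnosis step: stack base-model lesion probabilities and
# apply previously learned ensemble weights. Values are illustrative only.
import numpy as np

def integrated_diagnosis(p_image, p_clinical_1, p_clinical_2,
                         weights=(1.8, 0.9, 1.3), bias=-2.0):
    """Combine base-model lesion probabilities into a final prediction."""
    z = np.dot(weights, [p_image, p_clinical_1, p_clinical_2]) + bias
    return 1.0 / (1.0 + np.exp(-z))       # final lesion probability

print(f"{integrated_diagnosis(0.87, 0.62, 0.74):.2f}")  # ~0.75
```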
  • The image-based learning model 610, the clinical data-based learning models 630-1 and 630-2, and the integrated learning model 650 may be learning models built by the integrated lesion learning apparatus 10 of FIG. 1 described above.
  • FIG. 7 is a block diagram illustrating a detailed configuration of an image-based detection unit included in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • Referring to FIG. 7, the image-based detection unit 70 may include a lesion region detection unit 71, image-based severity detection units 72a, 72b, and 72c, and an image-based integrated detection unit 75 that receives the image-based lesion prediction results respectively output from the plurality of image-based severity detection units 72a, 72b, and 72c and outputs a corresponding image-based integrated disease prediction result.
  • the plurality of image-based severity detection units 72a, 72b, and 72c each have a different learning structure.
  • Medical images may be selectively used based on the characteristics of each body organ or diagnosis region or lesions present in the body organ or diagnosis region.
  • the image-based detection unit 70 may use a T2-weighted (T2) image, an apparent diffusion coefficients (ADC) image, or the like as an input of the lesion region detection unit 71.
  • the image-based detection unit 70 may use a STIR image, a T1 image, a T1 with Agents image, and a T2 image as an input of the lesion region detection unit 71.
  • the image-based detection unit 70 may use a T1 image, a T2 image, or a FLAIR as an input to the lesion region detection unit 71.
  • The lesion region detection unit 71 may include a lesion region detection learning model 710, and the lesion region detection learning model 710 may receive a medical image photographing the user's body (hereinafter referred to as an 'original medical image') and detect a medical image in which the lesion region is extracted from the original medical image (hereinafter referred to as a 'lesion region image').
  • The lesion region detection unit 71 may detect the lesion region image and provide the detected lesion region image as an input to the image-based severity detection units 72a, 72b, and 72c.
  • the operation of detecting the lesion area image from the original medical image by the lesion area detection unit 71 may be performed based on a convolutional neural network (CNN) technique or a pooling technique.
  • the lesion region detection unit 71 may include a lesion region detection learning model 710 that receives an original medical image as an input and outputs the lesion region image.
  • The image-based severity detection units 72a, 72b, and 72c may include image-based severity learning models 720a, 720b, and 720c, and the image-based severity learning models 720a, 720b, and 720c may be learning models built based on a convolutional neural network (CNN) technique or a pooling technique.
  • The image-based severity learning models 720a, 720b, and 720c may be composed of learning models having different structures, and even when the same lesion region image is input, the different learning models may output different image-based lesion prediction results 721a, 721b, and 721c, respectively.
  • the image-based lesion prediction results 721a, 721b, and 721c output as described above may be provided as an input of the image-based integrated detection unit 75.
  • the image-based integrated detection unit 75 may include an image-based integrated learning model 750 that receives image-based lesion prediction results 721a, 721b, and 721c and outputs an image-based integrated lesion prediction result.
  • The lesion region detection learning model 710, the image-based severity learning models 720a, 720b, and 720c, the image-based integrated learning model 750, and the like may be models built by the image-based learning unit 30 of FIG. 3 described above.
  • FIG. 8 is a block diagram illustrating a detailed configuration of a first clinical data-based detection unit provided in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • Referring to FIG. 8, the first clinical data-based detection unit 80 may include a noise filter unit 81 that removes noise from an input biosignal (e.g., ECG, PPG, EMG, etc.) and a diagnostic section extraction unit 82 that extracts, from the noise-removed biosignal, a diagnostic section to be used for lesion detection.
  • the first clinical data-based detection unit 80 may include a lesion signal detection unit 83.
  • the lesion signal detection unit 83 may include a first clinical data-based learning model 830-1, 830-2, ... 830-n that receives a biosignal of a diagnosis section and outputs a lesion severity.
  • The first clinical data-based learning models 830-1, 830-2, ..., 830-n may be models trained based on a recurrent neural network (RNN) method.
  • first clinical data-based learning models 830-1, 830-2, ... 830-n may be models constructed by the clinical data-based learning unit of FIG. 4 described above.
  • FIG. 9 is a block diagram showing a detailed configuration of a second clinical data-based detection unit provided in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • Referring to FIG. 9, the second clinical data-based detection unit 90 may include a data normalization unit 91 and a lesion data detection unit 92.
  • the second clinical data detected from bodily fluids, urine, biopsy, etc. generated by the user's body are composed of various types, and different values may be displayed according to each type. Accordingly, the data normalization unit 91 may normalize the second clinical data configured in various forms.
  • the lesion data detection unit 92 may include a second clinical data-based learning model 920 that receives normalized second clinical data and outputs a lesion severity.
  • The second clinical data-based learning model 920 may be a model built based on a feed-forward neural network (FFNN) method, for example, a model built by the second clinical data-based learning unit of FIG. 5 described above.
  • FIG. 10 is a flowchart illustrating a procedure of a method for learning lesion integration according to an embodiment of the present disclosure.
  • the method for learning integrated lesions according to an embodiment of the present disclosure may be performed by the apparatus for learning integrated lesions according to an embodiment of the present disclosure described above.
  • the integrated lesion learning apparatus may learn an image-based learning model that receives a medical image and outputs a lesion prediction result.
  • the integrated lesion learning apparatus may include a plurality of image-based learning models, and a probability of developing a specific disease in a specific region included in a medical image may be set as a target variable of the image-based learning model.
  • The medical image is an image of the entire body or of a specific diagnosis region captured through various imaging techniques, and may include a T2-weighted (T2) image, an apparent diffusion coefficient (ADC) image, a STIR image, a T1 image, a T1 with agents image, and the like.
  • The integrated lesion learning apparatus may receive a medical image photographing a user's body (hereinafter referred to as an 'original medical image') and detect a medical image in which a diagnosis region is extracted from the original medical image (hereinafter referred to as a 'lesion region image').
  • the operation of extracting the lesion region image from the original medical image by the integrated lesion learning apparatus may be performed based on a convolutional neural network (CNN) technique or a pooling technique.
  • In particular, the integrated lesion learning apparatus may include a lesion region detection learning model built through learning that takes an original medical image as an input and outputs a lesion region image.
  • the lesion integrated learning apparatus may input the original medical image into the lesion region detection learning model and detect a lesion region image corresponding thereto through the lesion region detection learning model.
  • medical images may be selectively used based on the characteristics of each body organ or diagnosis region, or lesions present in the body organ or diagnosis region.
  • the integrated lesion learning apparatus may use a T2-weighted (T2) image, an apparent diffusion coefficients (ADC) image, or the like as an input of the lesion region detection learning model.
  • the apparatus for integrated lesion learning may use an STIR image, a T1 image, a T1 with Agents image, and a T2 image as an input of the lesion region detection learning model.
  • the integrated lesion learning apparatus may use a T1 image, a T2 image, or a FLAIR as an input of a lesion region detection learning model.
  • The integrated lesion learning apparatus can train an image-based severity learning model, which can receive a lesion region image as the analysis target image for performing the learning, and the image-based severity learning model can be trained by labeling the severity for a specific object or a specific region included in the lesion region image.
  • the image-based severity learning model may be learned based on a convolutional neural network (CNN) technique or a pooling technique.
  • the image-based severity learning model may extract features of an image by analyzing an input image.
  • the feature may be a local feature for each area of the image.
  • the image-based severity learning model can extract features of an input image using a general convolutional neural network (CNN) technique or a pooling technique.
  • the pooling technique may include at least one of a max pooling technique and an average pooling technique.
  • the pooling technique referred to in the present disclosure is not limited to the max pooling technique or the average pooling technique, and includes any technique for obtaining a representative value of an image region of a predetermined size.
  • the representative value used in the pooling technique may be at least one of a variance value, a standard deviation value, a mean value, a most frequent value, a minimum value, and a weighted average value, in addition to the maximum value and the average value.
  • the convolutional neural network of the present disclosure may be used to extract “features” such as a border, a line color, and the like from input data (image), and may include a plurality of layers. Each layer may receive input data and may generate output data by processing the input data of the layer.
  • the convolutional neural network may output an input image or a feature map generated by convolving an input feature map with filter kernels as output data.
  • the initial layers of the convolutional neural network can be operated to extract low-level features such as edges or gradients from the input.
  • the next layers of the neural network can gradually extract more complex features such as eyes, nose, etc.
  • the convolutional neural network may include a convolutional layer in which a convolution operation is performed, as well as a pooling layer in which a pooling operation is performed.
  • the pooling technique is a technique used to reduce the spatial size of data in the pooling layer.
  • the pooling technique includes a max pooling technique that selects a maximum value in a corresponding domain and an average pooling technique that selects an average value of the domain.
  • In the field of image recognition, the max pooling technique is generally used.
  • the window size and interval (stride) of the pooling are set to the same value.
  • the stride refers to an interval to be moved when a filter is applied to the input data, that is, an interval to which the filter is moved, and the stride may also be used to adjust the size of the output data.
  • At least one image-based severity learning model may be configured such that learning models having different structures are constructed.
  • In addition, the integrated lesion learning apparatus can build a learning model that constructs the image-based integrated lesion prediction result by performing ensemble learning on the at least one image-based lesion prediction result provided from the image-based severity learning models; that is, an ensemble model can be built by learning weights for the results.
  • the integrated lesion learning apparatus may learn at least one clinical data-based learning model that receives clinical data and outputs a lesion prediction result.
  • The clinical data may include data detected from a biosignal (e.g., ECG, PPG, EMG, etc.) that measures a user's biological changes, or from body fluids, urine, biopsies, and the like generated by the user's body.
  • The clinical data-based learning model receives data detected from biosignals (e.g., ECG, PPG, EMG, etc.) or from body fluids, urine, biopsies, and the like, and the probability of developing a specific disease may be set as an objective variable of the clinical data-based learning model. Accordingly, the clinical data-based learning model may be trained to output a lesion prediction result corresponding to the above-described clinical data.
  • In an embodiment of the present disclosure, clinical data is exemplified as data detected from a biosignal (e.g., ECG, PPG, EMG, etc.) that measures a user's biological changes, or from body fluids, urine, biopsies, and the like generated by the user's body; however, the present disclosure is not limited thereto, and the type of clinical data may be variously changed.
  • the configuration of the clinical data-based learning unit is exemplified based on the type of clinical data described above, but the configuration of the clinical data-based learning unit may be variously changed according to the type of clinical data.
  • Hereinafter, the first clinical data exemplifies a biosignal (e.g., ECG, PPG, EMG, etc.) that measures a user's biological changes, the second clinical data exemplifies data detected from body fluids, urine, biopsies, and the like generated by the user's body, and the operation of training at least one clinical data-based learning model based on these is illustrated in more detail.
  • First, the integrated lesion learning apparatus may build a first clinical data-based learning model that learns the first clinical data.
  • To this end, the integrated lesion learning apparatus can remove noise from the first clinical data, that is, biosignals (e.g., ECG, PPG, EMG, etc.), and extract a diagnostic section to be used for learning or detection from the noise-removed biosignal.
  • In addition, the integrated lesion learning apparatus may build a first clinical data-based learning model that learns by using the biosignal of the diagnostic section as input data and the lesion severity as a target variable. Since the first clinical data may be data configured in sequential form, the first clinical data-based learning model can perform learning on the first clinical data based on a recurrent neural network (RNN) method.
  • the integrated lesion learning apparatus may build a second clinical data-based learning model that performs learning on the second clinical data.
  • Since the second clinical data, that is, the data detected from body fluids, urine, biopsies, and the like generated by the user's body, may be configured in various forms, the integrated lesion learning apparatus may normalize the second clinical data configured in various forms.
  • the integrated lesion learning apparatus may perform learning on a learning model based on the second clinical data that learns by using the normalized second clinical data as input data and using the lesion severity as a target variable.
  • In this case, the second clinical data-based learning model may be trained based on a feed-forward neural network (FFNN) method.
  • Then, the integrated lesion learning apparatus may receive the lesion prediction results provided in steps S1010 and S1020 and, as an output corresponding thereto, build an integrated learning model that learns the final lesion prediction result.
  • The integrated learning model uses the plurality of lesion prediction results provided in steps S1010 and S1020 as input data, and the same target variable as the one set when training the image-based learning model and the clinical data-based learning model can be set as its output. Accordingly, the integrated learning model can build an ensemble model by learning weights between the outputs of the image-based learning model and the clinical data-based learning model.
  • FIG. 11 is a flowchart illustrating a procedure of a method for diagnosing a lesion according to an embodiment of the present disclosure.
  • a method for diagnosing a lesion according to an embodiment of the present disclosure may be performed by the apparatus for diagnosing a lesion.
  • the apparatus for diagnosing a lesion may receive a medical image and output an image-based lesion prediction result corresponding thereto.
  • the apparatus for diagnosing lesions may output a probability of developing a specific disease in a specific region included in the medical image as an image-based lesion prediction result using at least one image-based learning model.
  • The medical image is an image of the entire body or of a specific diagnosis region captured through various imaging techniques, and may include a T2-weighted (T2) image, an apparent diffusion coefficient (ADC) image, a STIR image, a T1 image, a T1 with agents image, parametric MRI such as FLAIR, X-ray images, CT images, and the like.
  • In addition, the lesion diagnosis apparatus may have a lesion region detection learning model; the lesion region detection learning model may receive a medical image photographing the user's body (hereinafter referred to as an 'original medical image') and detect a medical image in which the lesion region is extracted from the original medical image (hereinafter referred to as a 'lesion region image').
  • a medical image may be selectively used based on the characteristics of a body organ or a diagnosis region, or a lesion existing in a body organ or diagnosis region.
  • the lesion diagnosis apparatus may select a T2-weighted (T2) image, an apparent diffusion coefficients (ADC) image, or the like as an input of at least one image-based learning model.
  • the lesion diagnosis apparatus may select a STIR image, a T1 image, a T1 with Agents image, and a T2 image as inputs of at least one image-based learning model.
  • the lesion diagnosis apparatus may select a T1 image, a T2 image, or a FLAIR as an input of at least one image-based learning model.
  • the operation of detecting the lesion region image from the original medical image by the lesion diagnosis apparatus may be performed based on a convolutional neural network (CNN) technique or a pooling technique.
  • The lesion diagnosis apparatus may include an image-based severity learning model, and the image-based severity learning model may be a learning model built based on a convolutional neural network (CNN) technique or a pooling technique.
  • the image-based severity learning model may consist of learning models of different structures, and even when the same lesion area image is input, different image-based lesion prediction results can be output by different learning models.
  • the lesion diagnosis apparatus may further include an image-based integrated learning model that receives the image-based lesion prediction result and outputs the image-based lesion prediction result, and provides the image-based integrated lesion prediction result through the image-based integrated learning model. Can be calculated.
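  • a minimal sketch, assuming PyTorch, of this arrangement: two severity models of deliberately different structures score the same lesion area image, and a small image-based integrated model combines their outputs; the depths and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

def severity_cnn(depth: int) -> nn.Module:
    # different `depth` values yield learning models of different structures
    layers, channels = [], 1
    for _ in range(depth):
        layers += [nn.Conv2d(channels, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        channels = 8
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid()]
    return nn.Sequential(*layers)

severity_models = [severity_cnn(2), severity_cnn(3)]   # different structures
integrator = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())

lesion_area_image = torch.rand(1, 1, 64, 64)           # placeholder input
per_model = torch.cat([m(lesion_area_image) for m in severity_models], dim=1)
image_based_integrated_prediction = integrator(per_model)
```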
  • the apparatus for diagnosing lesions may receive at least one item of clinical data and output at least one corresponding lesion prediction result.
  • the clinical data may include data detected from biosignals (e.g., ECG, PPG, EMG, etc.) that measure a user's biological changes, or from body fluids produced by the user's body, urine, biopsies, and the like.
  • the lesion diagnosis apparatus may include a clinical data-based learning model trained to receive the aforementioned clinical data, i.e., data detected from biosignals (e.g., ECG, PPG, EMG, etc.) or from body fluids produced by the user's body, urine, biopsies, etc., and to output the probability of developing a specific disease as a lesion prediction result.
  • the clinical data-based learning models may be organized by the type of input data, and the lesion diagnosis apparatus may classify the type of the input data and provide it to the corresponding clinical data-based learning model, as sketched below.
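  • a minimal sketch of this classify-and-dispatch idea; the type labels, handler names, and placeholder return values are illustrative, not the patent's.

```python
# each handler stands in for a clinical data-based learning model; the
# placeholder return values are not meaningful predictions
def predict_from_biosignal(data):      # first clinical data (e.g., ECG)
    return 0.5

def predict_from_lab_values(data):     # second clinical data (fluids, urine, ...)
    return 0.5

MODEL_BY_TYPE = {
    "biosignal": predict_from_biosignal,
    "lab": predict_from_lab_values,
}

def route_clinical_data(data, data_type: str):
    # classify the input type, then hand the data to the matching model
    return MODEL_BY_TYPE[data_type](data)

print(route_clinical_data([0.1, 0.2], "biosignal"))
```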
  • the lesion diagnosis apparatus may remove noise from a biosignal (e.g., ECG, PPG, EMG, etc.) input as first clinical data, and then extract, from the noise-removed biosignal, a diagnostic section to be used for lesion detection.
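  • a minimal sketch, assuming SciPy, of these two preprocessing steps: band-pass filtering for noise removal and slicing out a diagnostic section; the sampling rate, passband, and window length are illustrative ECG-style choices, not values from the disclosure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                  # assumed sampling rate (Hz)
biosignal = np.random.randn(int(fs * 10))   # placeholder 10 s recording

# noise removal: 4th-order 0.5-40 Hz band-pass (a common ECG choice)
b, a = butter(4, [0.5 / (fs / 2), 40.0 / (fs / 2)], btype="band")
denoised = filtfilt(b, a, biosignal)

# diagnostic section extraction: a 2 s window around the strongest deflection
center = int(np.argmax(np.abs(denoised)))
half_window = int(fs)
diagnostic_section = denoised[max(0, center - half_window):center + half_window]
```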
  • the lesion diagnosis apparatus may include first clinical data-based learning models (830-1, 830-2, ..., 830-n) that receive the biosignal of the extracted diagnostic section and output a lesion severity, through which a lesion prediction result based on the first clinical data can be output.
  • the first clinical data-based learning model may be a model trained based on a recurrent neural network (RNN) method.
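  • a minimal sketch, assuming PyTorch, of a recurrent model that reads the diagnostic section of the biosignal and outputs a lesion severity; a GRU is used here as one RNN variant, and its size and the single-score head are assumptions.

```python
import torch
import torch.nn as nn

class BiosignalSeverityRNN(nn.Module):
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden_size, 1), nn.Sigmoid())

    def forward(self, diagnostic_section: torch.Tensor) -> torch.Tensor:
        # diagnostic_section: (batch, time, 1) samples of the extracted section
        _, last_hidden = self.rnn(diagnostic_section)
        return self.head(last_hidden[-1])    # lesion severity in [0, 1]

severity = BiosignalSeverityRNN()(torch.rand(1, 500, 1))
```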
  • the second clinical data, detected from body fluids, urine, biopsies, etc. produced by the user's body, may take various forms, and different value ranges may be represented depending on each form. Accordingly, the lesion diagnosis apparatus may normalize the data input as second clinical data, that is, the data detected from body fluids produced by the user's body, urine, biopsies, and the like. Thereafter, the lesion diagnosis apparatus may output a lesion prediction result based on the second clinical data through a second clinical data-based learning model that receives the normalized second clinical data and outputs a lesion severity.
  • the second clinical data-based learning model may be a model constructed based on a feed-forward neural network (FFNN) method.
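  • a minimal sketch, assuming PyTorch, of this second clinical data path: heterogeneous values are z-score normalized and passed to a feed-forward network that outputs a lesion severity; the three features and their statistics are invented for illustration.

```python
import torch
import torch.nn as nn

# assumed per-feature population statistics for three hypothetical markers
feature_mean = torch.tensor([4.0, 1.2, 95.0])
feature_std = torch.tensor([1.5, 0.4, 20.0])

def normalize(second_clinical_data: torch.Tensor) -> torch.Tensor:
    # z-score normalization so differently scaled values become comparable
    return (second_clinical_data - feature_mean) / feature_std

ffnn = nn.Sequential(
    nn.Linear(3, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),          # lesion severity in [0, 1]
)

raw_values = torch.tensor([[5.1, 0.9, 120.0]])   # placeholder measurements
severity = ffnn(normalize(raw_values))
```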
  • the lesion diagnosis apparatus may calculate a final lesion prediction result by combining an image-based lesion prediction result, a lesion prediction result based on the first clinical data, and a lesion prediction result based on the second clinical data.
  • the final lesion prediction result may be calculated through an integrated learning model built through ensemble learning.
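  • a minimal inference-time sketch (PyTorch assumed) of combining the three lesion prediction results through an integrated model; the untrained linear combiner below merely stands in for the ensemble model built as described above.

```python
import torch
import torch.nn as nn

# stand-in for the trained integrated (ensemble) learning model
integrated_model = nn.Sequential(nn.Linear(3, 2), nn.Softmax(dim=1))

image_pred = torch.tensor([[0.82]])          # image-based lesion prediction
first_clinical_pred = torch.tensor([[0.64]])
second_clinical_pred = torch.tensor([[0.71]])

features = torch.cat([image_pred, first_clinical_pred, second_clinical_pred], dim=1)
final_lesion_prediction = integrated_model(features)
```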
  • the integrated learning model may be a learning model constructed by the lesion integrated learning apparatus 10 of FIG. 1 described above.
  • FIG. 12 is a block diagram illustrating a method and apparatus for integrated learning of a lesion and a computing system for executing the method and apparatus for diagnosing a lesion according to an embodiment of the present disclosure.
  • the computing system 1000 may include at least one processor 1100 connected through a bus 1200, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage device 1600, and a network interface 1700.
  • the processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600.
  • the memory 1300 and the storage 1600 may include various types of volatile or nonvolatile storage media.
  • the memory 1300 may include read only memory (ROM) and random access memory (RAM).
  • the steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by the processor 1100, or in a combination of the two.
  • a software module may reside in a storage medium (i.e., the memory 1300 and/or the storage 1600) such as RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, or a CD-ROM.
  • an exemplary storage medium is coupled to the processor 1100, and the processor 1100 can read information from, and write information to, the storage medium.
  • the storage medium may be integral with the processor 1100.
  • the processor and storage media may reside within an application specific integrated circuit (ASIC).
  • the ASIC may reside within the user terminal.
  • the processor and storage medium may reside as separate components within the user terminal.
  • exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, but this is not intended to limit the order in which steps are performed, and each step may be performed simultaneously or in a different order if necessary.
  • the exemplary steps may include additional steps, may include the remaining steps while omitting some steps, or may omit some steps and include additional other steps.
  • various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof.
  • in the case of implementation by hardware, the various embodiments may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, or the like.
  • the scope of the present disclosure includes software or machine-executable instructions (e.g., an operating system, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium storing such software or instructions so as to be executable on a device or computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention may relate to an integrated lesion learning apparatus. The integrated lesion learning apparatus may comprise: an image-based learning unit that trains at least one image-based learning model that receives a medical image and outputs an image-based lesion prediction result; at least one clinical data-based learning unit that trains a clinical data-based learning model that receives clinical data and outputs a clinical data-based lesion prediction result; and an integrated learning unit that causes an integrated learning model to perform ensemble learning, the integrated learning model receiving the image-based lesion prediction result and the clinical data-based lesion prediction result and outputting a final lesion prediction result.
PCT/KR2020/006971 2019-05-29 2020-05-29 Artificial intelligence-based diagnosis support system using an ensemble learning algorithm WO2020242239A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0063322 2019-05-29
KR1020190063322A KR102100698B1 (ko) Artificial intelligence-based diagnosis support system using an ensemble learning algorithm

Publications (1)

Publication Number Publication Date
WO2020242239A1 (fr)

Family

ID=70912678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/006971 WO2020242239A1 (fr) 2019-05-29 2020-05-29 Système de prise en charge de diagnostic basé sur l'intelligence artificielle utilisant un algorithme d'apprentissage d'ensemble

Country Status (2)

Country Link
KR (1) KR102100698B1 (fr)
WO (1) WO2020242239A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102100698B1 (ko) 2019-05-29 2020-05-18 (주)제이엘케이 Artificial intelligence-based diagnosis support system using an ensemble learning algorithm
WO2021162488A2 (fr) * 2020-02-10 2021-08-19 주식회사 바디프랜드 Disease prediction method and apparatus therefor
KR102605837B1 (ko) * 2020-07-13 2023-11-29 가톨릭대학교 산학협력단 Cancer progression/recurrence prediction system and prediction method using multiple images
WO2022015000A1 (fr) * 2020-07-13 2022-01-20 가톨릭대학교 산학협력단 Cancer progression/recurrence prediction system and cancer progression/recurrence prediction method using multiple images
KR20220065927A (ko) * 2020-11-13 2022-05-23 (주)루티헬스 Medical image reading device and medical image reading method
KR102317857B1 (ko) * 2020-12-14 2021-10-26 주식회사 뷰노 Lesion reading method
KR102503609B1 (ko) * 2021-02-01 2023-02-24 주식회사 코스모스메딕 System and method for generating virtual patient information using machine learning
KR102316525B1 (ko) * 2021-03-08 2021-10-22 주식회사 딥바이오 Method for training an artificial neural network for detecting prostate cancer from TURP pathology images, and computing system performing the same
KR102359362B1 (ko) * 2021-09-16 2022-02-08 주식회사 스카이랩스 Deep learning-based blood pressure estimation system using a PPG signal-sensing ring

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170140757A (ko) * 2016-06-10 2017-12-21 한국전자통신연구원 Clinical decision support ensemble system and clinical decision support method using the same
KR20180057300A (ko) * 2016-11-22 2018-05-30 네이버 주식회사 Method and system for predicting disease prognosis from a patient's diagnosis history using deep learning
KR101884609B1 (ko) * 2017-05-08 2018-08-02 (주)헬스허브 Disease diagnosis system based on modularized reinforcement learning
KR101857624B1 (ko) * 2017-08-21 2018-05-14 동국대학교 산학협력단 Medical diagnosis method reflecting clinical information and apparatus using the same
KR20190030151A (ko) * 2017-09-13 2019-03-21 이재준 Image analysis method, apparatus, and computer program
KR102100698B1 (ko) * 2019-05-29 2020-05-18 (주)제이엘케이 Artificial intelligence-based diagnosis support system using an ensemble learning algorithm

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220130544A1 (en) * 2020-10-23 2022-04-28 Remmie, Inc Machine learning techniques to assist diagnosis of ear diseases
CN113409944A (zh) * 2021-06-25 2021-09-17 清华大学深圳国际研究生院 Deep learning-based obstructive sleep apnea detection method and apparatus

Also Published As

Publication number Publication date
KR102100698B1 (ko) 2020-05-18

Similar Documents

Publication Publication Date Title
WO2020242239A1 (fr) Artificial intelligence-based diagnosis support system using an ensemble learning algorithm
WO2020235966A1 (fr) Device and method for processing a medical image using predicted metadata
WO2019132168A1 (fr) Surgical image data learning system
WO2021049729A1 (fr) Method for predicting the probability of developing lung cancer using an artificial intelligence model, and analysis device therefor
WO2017022882A1 (fr) Medical image pathological diagnosis classification apparatus and pathological diagnosis system using same
WO2017095014A1 (fr) Cell abnormality diagnosis system using DNN learning, and diagnosis management method thereof
WO2021025461A1 (fr) Ultrasound image-based diagnosis system for coronary artery lesions using machine learning, and diagnosis method
WO2020139009A1 (fr) Cerebrovascular disease learning device, cerebrovascular disease detection device, cerebrovascular disease learning method, and cerebrovascular disease detection method
WO2020076133A1 (fr) Validity evaluation device for cancer region detection
WO2021071288A1 (fr) Method and device for training a fracture diagnosis model
WO2019235828A1 (fr) Two-sided disease diagnosis system and method therefor
WO2021006522A1 (fr) Image diagnosis apparatus using a deep learning model and method therefor
WO2021137454A1 (fr) Artificial intelligence-based method and system for analyzing user medical information
WO2020076135A1 (fr) Deep learning model training device and method for a cancer region
WO2021261808A1 (fr) Method for displaying a lesion reading result
WO2020180135A1 (fr) Brain disease prediction apparatus and method, and learning apparatus for predicting brain disease
WO2023095989A1 (fr) Method and device for analyzing multimodal medical images for brain disease diagnosis
WO2021002669A1 (fr) Apparatus and method for building an integrated lesion learning model, and apparatus and method for diagnosing a lesion using the integrated lesion learning model
WO2021201582A1 (fr) Method and device for analyzing causes of a skin lesion
WO2021225226A1 (fr) Device and method for diagnosing Alzheimer's disease
WO2020222555A1 (fr) Image analysis device and method
WO2023075303A1 (fr) Artificial intelligence-based endoscopic diagnosis support system and control method therefor
WO2020116878A1 (fr) Device for predicting intracranial aneurysm using a fundus photo, and method for providing an intracranial aneurysm prediction result
WO2019164273A1 (fr) Method and device for predicting surgery time on the basis of a surgical image
WO2015099426A1 (fr) Cerebral infarction region segmentation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20815321

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/04/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20815321

Country of ref document: EP

Kind code of ref document: A1