WO2020242239A1 - Artificial intelligence-based diagnosis support system using ensemble learning algorithm

Artificial intelligence-based diagnosis support system using ensemble learning algorithm

Info

Publication number
WO2020242239A1
Authority
WO
WIPO (PCT)
Prior art keywords
lesion
image
learning
learning model
clinical data
Application number
PCT/KR2020/006971
Other languages
French (fr)
Korean (ko)
Inventor
김원태
강신욱
이명재
김동민
남동연
Original Assignee
(주)제이엘케이
Application filed by (주)제이엘케이
Publication of WO2020242239A1

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for mining of medical data, e.g. analysing previous cases of other patients
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 20/20: Ensemble learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • The present disclosure relates to deep learning model training technology, and more specifically to a method and apparatus for learning about lesions on the basis of medical images and clinical data, and to a method and apparatus for diagnosing lesions using a learning model built on the basis of medical images and clinical data.
  • Deep learning involves training on a very large amount of data; when new data is input, the answer with the highest probability is selected on the basis of the training result.
  • Such deep learning can adapt to the input image, and because feature factors are found automatically in the process of training a model on data, attempts to exploit it in the field of artificial intelligence are increasing.
  • Certain lesions may show signs in a specific area of the body, but some lesions may appear in a complex fashion across various areas of the body, and the accompanying bodily changes may likewise be complex. It is therefore difficult to detect a disease or lesion by considering only the symptoms or signs appearing in one specific area of the patient.
  • For example, a disease such as systemic lupus erythematosus, a kind of rheumatic disease, may exhibit simultaneous and multiple symptoms throughout the body.
  • An object of the present disclosure is to provide a method and apparatus for learning lesion severity by comprehensively considering an image of a body and a biological change occurring in the body.
  • Another technical problem of the present disclosure is to provide a method and apparatus for diagnosing a lesion that predicts the severity of a lesion by comprehensively reflecting an image of a body and a biological change occurring in the body.
  • Another technical object of the present disclosure is to provide a method and apparatus for learning, in a complex manner, the progression of a disease, the relationships between diseases, metastasis status, and the like, by comprehensively reflecting the various symptoms or signs expressed in the body.
  • Another technical object of the present disclosure is to provide a diagnostic method and apparatus capable of comprehensively predicting the progression of a disease, the relationships between diseases, metastasis status, and the like, by comprehensively reflecting the symptoms or signs variously expressed in the body.
  • According to an aspect of the present disclosure, an apparatus for integrated lesion learning may include: an image-based learning unit that receives a medical image and trains at least one image-based learning model that outputs an image-based lesion prediction result; at least one clinical data-based learning unit that receives clinical data and trains a learning model that outputs a clinical data-based lesion prediction result; and an integrated learning unit that performs ensemble learning on an integrated learning model that receives the image-based lesion prediction result and the clinical data-based lesion prediction result and outputs a final lesion prediction result.
  • According to another aspect of the present disclosure, a method for integrated lesion learning may include: training at least one image-based learning model that receives a medical image and outputs an image-based lesion prediction result; training at least one biosignal-based learning model that receives a biosignal and outputs a biosignal-based lesion prediction result corresponding to the biosignal; training at least one clinical information-based learning model that receives at least one item of clinical information obtained through a clinical examination and outputs a clinical data-based lesion prediction result corresponding to it; and performing ensemble learning on an integrated learning model that receives the image-based lesion prediction result, the biosignal-based lesion prediction result, and the clinical data-based lesion prediction result and outputs a final lesion prediction result.
  • According to another aspect of the present disclosure, an apparatus for diagnosing a lesion may be provided.
  • The apparatus is an apparatus for diagnosing a lesion using a learning model obtained by learning the lesion, and may include: an image-based prediction unit that outputs an image-based lesion prediction result corresponding to an input medical image using at least one image-based learning model; at least one clinical data-based prediction unit that outputs a clinical data-based lesion prediction result corresponding to the input clinical data using a clinical data-based learning model; and an integrated diagnosis unit that inputs the image-based lesion prediction result and the clinical data-based lesion prediction result into an integrated learning model and checks the final lesion prediction result output through the integrated learning model.
  • According to yet another aspect of the present disclosure, a method for diagnosing a lesion may be provided.
  • The method is a method for diagnosing a lesion using a learning model that has learned the lesion, and may include: outputting an image-based lesion prediction result corresponding to an input medical image using an image-based learning model; checking a biosignal-based lesion prediction result corresponding to an input biosignal using a biosignal-based learning model; checking a clinical data-based lesion prediction result corresponding to at least one item of clinical information acquired through a clinical examination using a clinical information-based learning model; and inputting the image-based lesion prediction result, the biosignal-based lesion prediction result, and the clinical data-based lesion prediction result into an integrated learning model and checking the final lesion prediction result output through the integrated learning model.
  • According to the present disclosure, a method and apparatus may be provided for learning lesion severity by comprehensively considering images of the body and the biological changes occurring in the body.
  • According to the present disclosure, a method and apparatus may be provided for predicting lesion severity by comprehensively considering images of the body and the biological changes occurring in the body.
  • According to the present disclosure, a method and apparatus may be provided for learning, in a complex manner, the progression of a disease, the relationships between diseases, metastasis status, and the like, by comprehensively reflecting the various symptoms or signs expressed in the body.
  • According to the present disclosure, a method and apparatus may be provided that are capable of comprehensively predicting the progression of a disease, the relationships between diseases, metastasis status, and the like, by comprehensively reflecting the various symptoms or signs expressed in the body.
  • According to the present disclosure, the diagnosis result is predicted through a model that has learned, in a complex manner, disease progression, the relationships between diseases, and metastasis on the basis of a large amount of data on the symptoms or signs variously expressed in the body, making it possible to derive a diagnosis result with relatively high reliability compared with a diagnosis or determination based on experiential judgment alone.
  • FIG. 1 is a block diagram showing the configuration of a lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating a learning operation of a learning model provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 3 is a block diagram showing a detailed configuration of an image-based learning unit included in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 4 is a block diagram illustrating a detailed configuration of a first clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 5 is a block diagram illustrating a detailed configuration of a second clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 6 is a block diagram showing the configuration of a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 7 is a block diagram illustrating a detailed configuration of an image-based detection unit included in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a block diagram illustrating a detailed configuration of a first clinical data-based detection unit provided in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 9 is a block diagram illustrating a detailed configuration of a second clinical data-based detection unit provided in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is a flowchart illustrating a procedure of a method for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 11 is a flowchart illustrating a procedure of a method for diagnosing a lesion according to an embodiment of the present disclosure.
  • FIG. 12 is a block diagram illustrating a computing system for executing the integrated lesion learning method and apparatus and the lesion diagnosis method and apparatus according to an embodiment of the present disclosure.
  • The terms first and second are used only to distinguish one component from another, and do not limit the order or importance of the components unless otherwise stated. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • components that are distinguished from each other are intended to clearly describe each characteristic, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed to form a plurality of hardware or software units. Therefore, even if not stated otherwise, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • components described in various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment consisting of a subset of components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to components described in the various embodiments are included in the scope of the present disclosure.
  • FIG. 1 is a block diagram showing the configuration of a lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • the integrated lesion learning apparatus 10 may include an image-based learning unit 11, at least one clinical data-based learning unit 13, and an integrated learning unit 15.
  • the image-based learning unit 11 may learn an image-based learning model for receiving a medical image and outputting a lesion prediction result.
  • the image-based learning unit 11 may include a plurality of image-based learning models, and a probability of developing a specific disease in a specific region included in the medical image may be set as a target variable of the image-based learning model.
  • the detailed configuration and operation of the image-based learning unit 11 will be described in detail with reference to FIG. 3 attached below.
  • Medical images are images of the entire body or of a specific diagnostic area captured through various imaging techniques, such as T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, STIR images, T1 images, and T1 with agents images.
  • the clinical data-based learning unit 13 may learn a clinical data-based learning model that receives clinical data and outputs lesion prediction results.
  • The clinical data may include biosignals (e.g., ECG, PPG, EMG) that measure a user's biological changes, or data detected from body fluids, urine, biopsies, and the like produced by the user's body.
  • The clinical data-based learning model receives biosignals (e.g., ECG, PPG, EMG) or data detected from body fluids, urine, biopsies, and the like produced by the user's body, and the probability of developing a specific disease may be set as the objective variable of the clinical data-based learning model. Accordingly, the clinical data-based learning model may be trained to output a lesion prediction result corresponding to the aforementioned clinical data.
  • At least one of the aforementioned clinical data-based learning units 13 may be provided in the integrated lesion learning apparatus 10, and each clinical data-based learning unit 13 may include a clinical data-based learning model.
  • Clinical data-based learning models may be trained based on at least one of logistic regression, multinomial logistic regression, multi-layer perceptron, stochastic gradient descent, bagging, random forest, decision tree, support vector machine, k-nearest neighbors, linear regression, Bayesian regression, and kernel ridge regression.
  • The above illustrates machine learning algorithms that can be used for a clinical data-based learning model; however, the present disclosure is not limited thereto, and various other types of machine learning algorithms may be used.
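  • For illustration only (not part of the original disclosure), the following is a minimal Python sketch of how a few of the listed algorithms might be trained on tabular clinical data with scikit-learn; the feature matrix, disease labels, and model choices are hypothetical placeholders.

```python
# Illustrative sketch (not the patent's implementation): training two of the
# listed algorithm families on tabular clinical data with scikit-learn.
# Feature values and disease labels below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # 200 patients x 8 clinical measurements
y = rng.integers(0, 2, size=200)     # 1 = specific disease present (objective variable)

# Any of the listed learners could serve as a clinical data-based model.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X, y)
    # Probability of developing the specific disease, as described above.
    proba = model.predict_proba(X)[:, 1]
    print(name, proba[:3])
```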
  • the clinical data-based learning unit 13 may be configured to be classified according to the type of input data.
  • For example, the clinical data-based learning unit 13 may include a first clinical data-based learning unit 13-1 that learns from biosignals (e.g., ECG, PPG, EMG) and a second clinical data-based learning unit 13-2 that learns from data detected from body fluids, urine, biopsies, and the like.
  • The clinical data-based learning unit 13 may check the type of the input data, select the learning unit 13-1 or 13-2 corresponding to the identified type, and provide the clinical data to it.
  • As another example, a user interface may be provided so that biosignals (e.g., ECG, PPG, EMG) can be input to the first clinical data-based learning unit 13-1 and data detected from body fluids, urine, biopsies, and the like can be input to the second clinical data-based learning unit 13-2.
  • As yet another example, clinical data suitable for the first clinical data-based learning unit 13-1 and the second clinical data-based learning unit 13-2 may be set at the design stage, and an environment may be provided in which the user inputs the corresponding clinical data to the first and second clinical data-based learning units 13-1 and 13-2 through the user interface.
  • The integrated learning unit 15 may receive the lesion prediction results output from the image-based learning unit 11 and the clinical data-based learning unit 13, and may include an integrated learning model that learns the final lesion prediction result as the corresponding output. In particular, the integrated learning unit 15 may be configured to obtain the final lesion prediction result by performing ensemble learning on the plurality of lesion prediction results provided from the image-based learning unit 11 and the clinical data-based learning unit 13.
  • The lesion prediction results output from the image-based learning unit 11 and the clinical data-based learning unit 13 are set as the input data of the integrated learning model, and the same objective variable as the one set when training the image-based learning model and the clinical data-based learning model can be set as the output data of the integrated learning model.
  • the integrated learning model can build an ensemble model by learning weights between the outputs of the image-based learning model and the clinical data-based learning model.
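  • As an illustrative sketch of this weight-learning step (one possible realization, not the patent's specified implementation), a logistic-regression meta-learner can learn weights over the base models' outputs; all array names and shapes below are hypothetical.

```python
# Illustrative sketch: a simple meta-learner that takes base-model lesion
# predictions as its input features and learns weights between them.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical outputs of the already-trained base models on N samples.
N = 500
p_image = np.random.rand(N)       # image-based lesion prediction
p_clinical1 = np.random.rand(N)   # first clinical data-based prediction
p_clinical2 = np.random.rand(N)   # second clinical data-based prediction
y_final = np.random.randint(0, 2, size=N)  # same objective variable as the base models

# Stack the base predictions into one feature vector per sample; the learned
# coefficients act as ensemble weights between the base-model outputs.
Z = np.column_stack([p_image, p_clinical1, p_clinical2])
meta = LogisticRegression().fit(Z, y_final)
print("learned ensemble weights:", meta.coef_)
```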
  • FIG. 2 is a diagram illustrating a learning operation of a learning model provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • An integrated learning data set 200 may be configured, and data obtained by combining a plurality of integrated learning data sets 200 may constitute the integrated learning data.
  • the integrated learning data set 200 may be configured to include medical image data 210, first clinical data 220, and second clinical data 230.
  • The integrated learning data set 200 may be configured to include an image reading result 215 containing the lesion and its severity for a specific area in the medical image, a first clinical reading result 225 containing the lesion and its severity for the first clinical data, and a second clinical reading result 235 containing the lesion and its severity for the second clinical data; the image reading result 215, the first clinical reading result 225, and the second clinical reading result 235 may be configured to correspond to the medical image data 210, the first clinical data 220, and the second clinical data 230, respectively.
  • In addition, the integrated learning data set 200 may be configured to include final reading result data 250 containing the lesion and its severity determined on the basis of the medical image data 210, the first clinical data 220, and the second clinical data 230.
  • the image-based learning model 21 may perform learning by receiving medical image data 210 and receiving an image reading result 215 as a target variable.
  • the first clinical data-based learning model 22 may receive the first clinical data 220 and receive the first clinical reading result 225 as a target variable to perform learning.
  • the second clinical data-based learning model 23 may perform learning by receiving the second clinical data 230 and receiving the second clinical reading result 235 as a target variable.
  • The integrated learning model 25 may receive the image reading result 215, the first clinical reading result 225, the second clinical reading result 235, and the like as inputs, and take the final reading result data 250 as its target variable to perform learning. For example, the integrated learning model 25 may form each of the image reading result 215, the first clinical reading result 225, and the second clinical reading result 235 into a single feature vector and build an ensemble model that learns a class for the target variable from each of these feature vectors.
  • The image-based learning model 21, the first clinical data-based learning model 22, the second clinical data-based learning model 23, and the integrated learning model 25 can be built by performing learning on the integrated learning data obtained by combining a plurality of integrated learning data sets 200.
  • In addition, a first data set 260 and a second data set 270 may be configured.
  • The second data set 270 may be a validation set used for verification, or a new data set having the same features and target-variable information as the first data set 260; however, the present disclosure is not limited thereto.
  • Learning of the image-based learning model 21, the first clinical data-based learning model 22, and the second clinical data-based learning model 23 may be performed using the first data set 260, and learning of the integrated learning model 25 may be performed using the second data set 270, so that the learning of the integrated learning model 25 is carried out separately.
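  • A minimal sketch of this two-stage procedure, assuming scikit-learn-style base models and synthetic data standing in for the first data set 260 and second data set 270:

```python
# Sketch of the two-stage procedure: base models are fit on the first data
# set (260) and the integrated model is fit on their predictions over the
# second data set (270). All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.random.rand(600, 10)
y = np.random.randint(0, 2, size=600)
# first data set (260) / second data set (270)
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.3, random_state=0)

base_a = RandomForestClassifier(random_state=0).fit(X1, y1)
base_b = LogisticRegression(max_iter=1000).fit(X1, y1)

# Base-model predictions on the held-out second set become the integrated
# model's training features, keeping its learning separate.
Z2 = np.column_stack([base_a.predict_proba(X2)[:, 1],
                      base_b.predict_proba(X2)[:, 1]])
integrated = LogisticRegression().fit(Z2, y2)
```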
  • FIG. 3 is a block diagram showing a detailed configuration of an image-based learning unit included in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • The image-based learning unit 30 may include a lesion area detection unit 31, image-based severity learning units 32a, 32b, and 32c, and an image-based integrated learning unit 35 that performs ensemble learning on the image-based lesion prediction results output by the plurality of image-based severity learning units 32a, 32b, and 32c.
  • the plurality of image-based severity learning units 32a, 32b, and 32c each have a different learning structure.
  • The lesion area detection unit 31 can receive a medical image of the user's body (hereinafter referred to as the 'original medical image') and detect a medical image in which the lesion area is extracted from the original medical image (hereinafter referred to as the 'lesion area image').
  • The lesion area detection unit 31 may detect the lesion area image and provide it as an input to the image-based severity learning units 32a, 32b, and 32c.
  • the operation of extracting the lesion area image from the original medical image by the lesion area detection unit 31 may be performed based on a convolutional neural network (CNN) technique or a pooling technique.
  • The lesion region detection unit 31 may include a lesion region detection learning model 310 built through learning that takes an original medical image as input and outputs the lesion region image.
  • The lesion region detection unit 31 may input the original medical image into the lesion region detection learning model 310 and detect the corresponding lesion region image through the lesion region detection learning model 310.
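  • As a hedged illustration of such a lesion region detection learning model (the architecture, input size, and channel counts are assumptions, not taken from the disclosure), a small convolutional network can map an original medical image to a per-pixel lesion-area probability map:

```python
# Minimal PyTorch sketch of a CNN in the spirit of the lesion area detection
# learning model 310; the architecture and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LesionAreaDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # pooling reduces spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, 1, kernel_size=1),      # per-pixel lesion score
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))         # lesion-area probability map

model = LesionAreaDetector()
original = torch.randn(1, 1, 128, 128)            # one single-channel MRI slice
lesion_area_map = model(original)                 # same spatial size as input
print(lesion_area_map.shape)                      # torch.Size([1, 1, 128, 128])
```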
  • medical images may be selectively used based on the characteristics of each body organ or diagnosis region, or lesions present in the body organ or diagnosis region.
  • the image-based learning unit 30 may use a T2-weighted (T2) image, an apparent diffusion coefficients (ADC) image, and the like as an input of the lesion region detection unit 31.
  • the image-based learning unit 30 may use a STIR image, a T1 image, a T1 with Agents image, and a T2 image as an input of the lesion region detection unit 31.
  • The image-based learning unit 30 may use a T1 image, a T2 image, or a FLAIR image as an input to the lesion region detection unit 31.
  • The image-based severity learning units 32a, 32b, and 32c can train the image-based severity learning models 320a, 320b, and 320c, and can receive a lesion region image as the analysis target image for performing the learning.
  • Learning of the image-based severity learning models 320a, 320b, and 320c may be performed by labeling the severity for a specific object or a specific area included in the lesion area image.
  • The image-based severity learning units 32a, 32b, and 32c may carry out learning of the image-based severity learning models 320a, 320b, and 320c based on a convolutional neural network (CNN) technique or a pooling technique.
  • the image-based severity learning models 320a, 320b, and 320c may analyze an input image to extract features of an image.
  • the feature may be a local feature for each area of the image.
  • the image-based severity learning models 320a, 320b, and 320c may extract features of an input image using a general convolutional neural network (CNN) technique or a pooling technique.
  • the pooling technique may include at least one of a max pooling technique and an average pooling technique.
  • the pooling technique referred to in the present disclosure is not limited to the max pooling technique or the average pooling technique, and includes any technique for obtaining a representative value of an image region of a predetermined size.
  • the representative value used in the pooling technique may be at least one of a variance value, a standard deviation value, a mean value, a most frequent value, a minimum value, and a weighted average value, in addition to the maximum value and the average value.
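  • A small sketch of this generalized pooling idea, where any representative value (maximum, average, variance, and so on) is computed over fixed-size windows; the window size, stride, and test image are arbitrary:

```python
# Generalized pooling: any representative value computed over fixed windows.
import numpy as np

def pool2d(image, window, stride, reduce_fn):
    h = (image.shape[0] - window) // stride + 1
    w = (image.shape[1] - window) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            region = image[i*stride:i*stride+window, j*stride:j*stride+window]
            out[i, j] = reduce_fn(region)  # representative value of the region
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(img, 2, 2, np.max))   # max pooling
print(pool2d(img, 2, 2, np.mean))  # average pooling
print(pool2d(img, 2, 2, np.var))   # variance as the representative value
```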
  • the convolutional neural network of the present disclosure may be used to extract "features" such as a border, a line color, and the like from input data (images), and may include a plurality of layers. Each layer may receive input data and process the input data of the layer to generate output data.
  • the convolutional neural network may output an input image or a feature map generated by convolving an input feature map with filter kernels as output data.
  • the initial layers of the convolutional neural network can be operated to extract low-level features such as edges or gradients from the input.
  • the next layers of the neural network can gradually extract more complex features such as eyes, nose, etc.
  • the convolutional neural network may include a pooling layer in which a pooling operation is performed in addition to a convolutional layer in which a convolution operation is performed.
  • the pooling technique is a technique used to reduce the spatial size of data in the pooling layer.
  • the pooling technique includes a max pooling technique that selects the maximum value in a corresponding area and an average pooling technique that selects an average value of the area.
  • In general, the max pooling technique is used, and the pooling window size and interval (stride) are typically set to the same value.
  • the stride refers to an interval to be moved when a filter is applied to input data, that is, an interval to which the filter is moved, and stride may also be used to adjust the size of output data.
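  • For example, with the usual size relation output = floor((input - window) / stride) + 1, applying a 2x2 pooling window with stride 2 to a 32x32 feature map yields a 16x16 map, whereas stride 1 would yield a 31x31 map; this is the sense in which the stride adjusts the size of the output data.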
  • The image-based severity learning models 320a, 320b, and 320c may be built as learning models of different structures, and may be configured to output the image-based lesion prediction results 321a, 321b, and 321c based on the learning models 320a, 320b, and 320c, respectively. Further, the image-based lesion prediction results 321a, 321b, and 321c provided by the learning models 320a, 320b, and 320c, respectively, may be configured to be provided as inputs to the image-based integrated learning unit 35.
  • The image-based integrated learning unit 35 may include an image-based integrated learning model 350 that receives the image-based lesion prediction results 321a, 321b, and 321c and learns the image-based integrated lesion prediction result as the corresponding output.
  • The image-based integrated learning unit 35 can perform ensemble learning on the plurality of image-based lesion prediction results 321a, 321b, and 321c provided from the image-based severity learning models 320a, 320b, and 320c to construct the image-based integrated lesion prediction result.
  • In the present disclosure, clinical data is exemplified as biosignals (e.g., ECG, PPG, EMG) that measure a user's biological changes, or data detected from body fluids, urine, biopsies, and the like produced by the user's body; however, the present disclosure is not limited thereto, and the types of clinical data may be variously changed.
  • the configuration of the clinical data-based learning unit is exemplified based on the type of clinical data described above, but the configuration of the clinical data-based learning unit may be variously changed according to the type of clinical data.
  • In the following description, the first clinical data-based learning unit exemplifies a component that receives biosignals (e.g., ECG, PPG, EMG) measuring a user's biological changes and performs learning of a learning model, and the second clinical data-based learning unit exemplifies a component that performs learning of a learning model by receiving data detected from body fluids, urine, biopsies, and the like produced by the user's body.
  • FIG. 4 is a block diagram illustrating a detailed configuration of a first clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • The first clinical data-based learning unit 40 may include a noise filter unit 41 that removes noise from an input biosignal (e.g., ECG, PPG, EMG) and a diagnostic section extraction unit 42 that extracts, from the noise-removed biosignal, a diagnostic section to be used for learning or detection.
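  • As an illustrative sketch of the noise filter unit 41 and the diagnostic section extraction unit 42 (the band-pass cutoffs, sampling rate, and window length are assumptions for an ECG-like signal, not values from the disclosure):

```python
# Sketch: band-pass filtering followed by fixed-length section extraction.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                  # assumed ECG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # noisy toy signal

# Noise filtering: a 0.5-40 Hz band-pass, common for ECG-like signals.
b, a = butter(4, [0.5, 40], btype="bandpass", fs=fs)
clean = filtfilt(b, a, ecg)

# Diagnostic section extraction: split the cleaned signal into 2-second windows.
win = 2 * fs
sections = [clean[i:i + win] for i in range(0, clean.size - win + 1, win)]
print(len(sections), sections[0].shape)
```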
  • the first clinical data-based learning unit 40 may include a lesion signal learning unit 43.
  • The lesion signal learning unit 43 may include first clinical data-based learning models 430-1, 430-2, ..., 430-n that learn by using the biosignal of the diagnostic section as input data and the lesion severity as the target variable.
  • Since the first clinical data may be data configured in sequential form, the first clinical data-based learning models 430-1, 430-2, ..., 430-n can perform learning on the first clinical data based on a recurrent neural network (RNN) method.
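  • A minimal PyTorch sketch of an RNN-based model in this spirit, taking a diagnostic-section biosignal as a sequence and predicting lesion severity; the LSTM variant, layer sizes, and number of severity classes are illustrative assumptions:

```python
# Sketch: an LSTM over a diagnostic-section biosignal, lesion severity as target.
import torch
import torch.nn as nn

class BiosignalSeverityRNN(nn.Module):
    def __init__(self, hidden=64, num_severity_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_severity_classes)

    def forward(self, x):                 # x: (batch, time, 1) biosignal section
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])         # logits over lesion severity levels

model = BiosignalSeverityRNN()
section = torch.randn(8, 500, 1)          # 8 diagnostic sections of 500 samples
severity_logits = model(section)
loss = nn.CrossEntropyLoss()(severity_logits, torch.randint(0, 4, (8,)))
loss.backward()                           # gradients for one training step
```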
  • FIG. 5 is a block diagram illustrating a detailed configuration of a second clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • the second clinical data-based learning unit 50 may include a data normalization unit 51 and a lesion data learning unit 52.
  • the second clinical data detected from bodily fluids, urine, biopsy, etc. generated by the user's body may be configured in various forms, and each may represent different values. Accordingly, the data normalization unit 51 may normalize the second clinical data configured in various forms.
  • the lesion data learning unit 52 may include a second clinical data-based learning model 520 for learning by using the normalized second clinical data as input data and using the lesion severity as a target variable.
  • The second clinical data-based learning model 520 may perform learning on the second clinical data based on a feed-forward neural network (FFNN) method.
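  • As a hedged sketch of the data normalization unit 51 followed by an FFNN-based second clinical data model (the measurement values, network sizes, and number of severity levels are hypothetical):

```python
# Sketch: z-score normalization of heterogeneous lab values, then a small
# feed-forward network predicting lesion severity.
import torch
import torch.nn as nn

lab_values = torch.tensor([[5.4, 140.0, 0.9],     # hypothetical fluid/urine/
                           [6.1, 133.0, 1.4],     # biopsy-derived measurements
                           [4.8, 151.0, 1.1]])
normalized = (lab_values - lab_values.mean(0)) / lab_values.std(0)

ffnn = nn.Sequential(                              # feed-forward layers only
    nn.Linear(3, 16), nn.ReLU(),
    nn.Linear(16, 4),                              # 4 assumed severity levels
)
severity_logits = ffnn(normalized)
print(severity_logits.shape)                       # torch.Size([3, 4])
```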
  • FIG. 6 is a block diagram showing the configuration of a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • the lesion diagnosis apparatus 60 may include an image-based detection unit 61, at least one clinical data-based detection unit 63, and an integrated diagnosis unit 65.
  • The image-based detection unit 61 may include at least one image-based learning model 610 that receives a medical image and outputs a corresponding image-based lesion prediction result; through this model, the probability of developing a specific disease in a specific area included in the medical image can be output as the lesion prediction result.
  • Medical images are images of the entire body or of a specific diagnostic area captured through various imaging techniques, such as T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, STIR images, T1 images, and T1 with agents images.
  • the clinical data-based detection unit 63 may include clinical data-based learning models 630-1 and 630-2 for receiving clinical data and outputting lesion prediction results.
  • The clinical data may include biosignals (e.g., ECG, PPG, EMG) that measure a user's biological changes, or data detected from body fluids, urine, biopsies, and the like produced by the user's body.
  • The clinical data-based learning models 630-1 and 630-2 may be models trained to receive biosignals (e.g., ECG, PPG, EMG) or data detected from body fluids, urine, biopsies, and the like produced by the user's body, and to output the probability of developing a specific disease corresponding to the aforementioned clinical data as the lesion prediction result.
  • At least one of the above-described clinical data-based detection units 63 may be provided in the lesion diagnosis apparatus 60, and each clinical data-based detection unit 63 may include one of the clinical data-based learning models 630-1 and 630-2.
  • the clinical data-based detection unit 63 may be configured to be classified according to the type of input data.
  • For example, the clinical data-based detection unit 63 may include a first clinical data-based detection unit 63-1 that takes biosignals (e.g., ECG, PPG, EMG) as input and a second clinical data-based detection unit 63-2 that takes data detected from body fluids, urine, biopsies, and the like as input.
  • the clinical data-based detection unit 63 may be configured to check the type of input data, select the detection units 63-1 and 63-2 corresponding to the identified type, and provide the corresponding clinical data.
  • As another example, a user interface may be provided so that biosignals (e.g., ECG, PPG, EMG) can be input to the first clinical data-based detection unit 63-1 and data detected from body fluids, urine, biopsies, and the like can be input to the second clinical data-based detection unit 63-2.
  • the integrated diagnosis unit 65 may receive a lesion prediction result output from the image-based detection unit 61 and the clinical data-based detection unit 63, and confirm the final lesion prediction result as an output corresponding thereto.
  • The integrated diagnosis unit 65 may input the plurality of lesion prediction results provided from the image-based detection unit 61 and the clinical data-based detection unit 63 into the integrated learning model 650 built through ensemble learning, thereby obtaining the corresponding final lesion prediction result.
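  • An illustrative sketch of this diagnosis-time flow, assuming scikit-learn-style base models and a meta-model such as those sketched earlier; at diagnosis the trained models are only applied, not retrained, and all names here are hypothetical:

```python
# Sketch of the integrated diagnosis step: apply trained models, no retraining.
import numpy as np

def integrated_diagnosis(image_model, clinical_models, meta_model, image_x, clinical_xs):
    """Return the final lesion prediction for one patient."""
    preds = [image_model.predict_proba(image_x)[:, 1]]
    for model, x in zip(clinical_models, clinical_xs):
        preds.append(model.predict_proba(x)[:, 1])
    z = np.column_stack(preds)               # base predictions as one feature vector
    return meta_model.predict_proba(z)[:, 1]  # final lesion prediction result
```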
  • The image-based learning model 610, the clinical data-based learning models 630-1 and 630-2, and the integrated learning model 650 may be learning models built by the integrated lesion learning apparatus 10 of FIG. 1 described above.
  • FIG. 7 is a block diagram illustrating a detailed configuration of an image-based detection unit included in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • The image-based detection unit 70 may include a lesion area detection unit 71, image-based severity detection units 72a, 72b, and 72c, and an image-based integrated detection unit 75 that receives the image-based lesion prediction results output by the plurality of image-based severity detection units 72a, 72b, and 72c, respectively, and outputs a corresponding image-based integrated lesion prediction result.
  • the plurality of image-based severity detection units 72a, 72b, and 72c each have a different learning structure.
  • Medical images may be selectively used based on the characteristics of each body organ or diagnosis region or lesions present in the body organ or diagnosis region.
  • the image-based detection unit 70 may use a T2-weighted (T2) image, an apparent diffusion coefficients (ADC) image, or the like as an input of the lesion region detection unit 71.
  • the image-based detection unit 70 may use a STIR image, a T1 image, a T1 with Agents image, and a T2 image as an input of the lesion region detection unit 71.
  • The image-based detection unit 70 may use a T1 image, a T2 image, or a FLAIR image as an input to the lesion region detection unit 71.
  • The lesion area detection unit 71 may include a lesion area detection learning model 710, which receives a medical image of the user's body (hereinafter referred to as the 'original medical image') and detects a medical image in which the lesion area is extracted from the original medical image (hereinafter referred to as the 'lesion area image').
  • The lesion region detection unit 71 may detect the lesion region image and provide it as an input to the image-based severity detection units 72a, 72b, and 72c.
  • the operation of detecting the lesion area image from the original medical image by the lesion area detection unit 71 may be performed based on a convolutional neural network (CNN) technique or a pooling technique.
  • the lesion region detection unit 71 may include a lesion region detection learning model 710 that receives an original medical image as an input and outputs the lesion region image.
  • The image-based severity detection units 72a, 72b, and 72c may include image-based severity learning models 720a, 720b, and 720c, which may be learning models built based on a convolutional neural network (CNN) technique or a pooling technique.
  • The image-based severity learning models 720a, 720b, and 720c may consist of learning models of different structures, and even when the same lesion area image is input, the different learning models may output different image-based lesion prediction results 721a, 721b, and 721c.
  • the image-based lesion prediction results 721a, 721b, and 721c output as described above may be provided as an input of the image-based integrated detection unit 75.
  • the image-based integrated detection unit 75 may include an image-based integrated learning model 750 that receives image-based lesion prediction results 721a, 721b, and 721c and outputs an image-based integrated lesion prediction result.
  • The lesion area detection learning model 710, the image-based severity learning models 720a, 720b, and 720c, and the image-based integrated learning model 750 may be learning models built by the image-based learning unit 30 of FIG. 3 described above.
  • FIG. 8 is a block diagram illustrating a detailed configuration of a first clinical data-based detection unit provided in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • The first clinical data-based detection unit 80 may include a noise filter unit 81 that removes noise from an input biosignal (e.g., ECG, PPG, EMG) and a diagnostic section extraction unit 82 that extracts, from the noise-removed biosignal, a diagnostic section to be used for lesion detection.
  • the first clinical data-based detection unit 80 may include a lesion signal detection unit 83.
  • The lesion signal detection unit 83 may include first clinical data-based learning models 830-1, 830-2, ..., 830-n that receive the biosignal of the diagnostic section and output a lesion severity.
  • The first clinical data-based learning models 830-1, 830-2, ..., 830-n may be models trained based on a recurrent neural network (RNN) method.
  • The first clinical data-based learning models 830-1, 830-2, ..., 830-n may be models built by the first clinical data-based learning unit of FIG. 4 described above.
  • FIG. 9 is a block diagram showing a detailed configuration of a second clinical data-based detection unit provided in a lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • The second clinical data-based detection unit 90 may include a data normalization unit 91 and a lesion data detection unit 92.
  • the second clinical data detected from bodily fluids, urine, biopsy, etc. generated by the user's body are composed of various types, and different values may be displayed according to each type. Accordingly, the data normalization unit 91 may normalize the second clinical data configured in various forms.
  • the lesion data detection unit 92 may include a second clinical data-based learning model 920 that receives normalized second clinical data and outputs a lesion severity.
  • The second clinical data-based learning model 920 may be a model built based on the feed-forward neural network (FFNN) method; for example, it may be a model built by the second clinical data-based learning unit of FIG. 5 described above.
  • FIG. 10 is a flowchart illustrating a procedure of a method for learning lesion integration according to an embodiment of the present disclosure.
  • the method for learning integrated lesions according to an embodiment of the present disclosure may be performed by the apparatus for learning integrated lesions according to an embodiment of the present disclosure described above.
  • the integrated lesion learning apparatus may learn an image-based learning model that receives a medical image and outputs a lesion prediction result.
  • the integrated lesion learning apparatus may include a plurality of image-based learning models, and a probability of developing a specific disease in a specific region included in a medical image may be set as a target variable of the image-based learning model.
  • The medical image is an image of the entire body or of a specific diagnostic area captured through various imaging techniques, and may include a T2-weighted (T2) image, an apparent diffusion coefficient (ADC) image, a STIR image, a T1 image, a T1 with agents image, and the like.
  • The integrated lesion learning apparatus may receive a medical image of the user's body (hereinafter referred to as the 'original medical image') and detect a medical image in which the lesion area is extracted from the original medical image (hereinafter referred to as the 'lesion area image').
  • the operation of extracting the lesion region image from the original medical image by the integrated lesion learning apparatus may be performed based on a convolutional neural network (CNN) technique or a pooling technique.
  • The integrated lesion learning apparatus may include a lesion region detection learning model built through learning that takes an original medical image as input and outputs a lesion region image.
  • the lesion integrated learning apparatus may input the original medical image into the lesion region detection learning model and detect a lesion region image corresponding thereto through the lesion region detection learning model.
  • medical images may be selectively used based on the characteristics of each body organ or diagnosis region, or lesions present in the body organ or diagnosis region.
  • the integrated lesion learning apparatus may use a T2-weighted (T2) image, an apparent diffusion coefficients (ADC) image, or the like as an input of the lesion region detection learning model.
  • The apparatus for integrated lesion learning may use a STIR image, a T1 image, a T1 with Agents image, and a T2 image as inputs of the lesion region detection learning model.
  • The integrated lesion learning apparatus may use a T1 image, a T2 image, or a FLAIR image as an input of the lesion region detection learning model.
  • The integrated lesion learning apparatus can train an image-based severity learning model, which can receive a lesion region image as the image to be analyzed for performing the learning; the image-based severity learning model can be trained by labeling the severity of a specific object or specific region included in the lesion region image.
  • the image-based severity learning model may be learned based on a convolutional neural network (CNN) technique or a pooling technique.
  • the image-based severity learning model may extract features of an image by analyzing an input image.
  • the feature may be a local feature for each area of the image.
  • the image-based severity learning model can extract features of an input image using a general convolutional neural network (CNN) technique or a pooling technique.
  • the pooling technique may include at least one of a max pooling technique and an average pooling technique.
  • the pooling technique referred to in the present disclosure is not limited to the max pooling technique or the average pooling technique, and includes any technique for obtaining a representative value of an image region of a predetermined size.
  • the representative value used in the pooling technique may be at least one of a variance value, a standard deviation value, a mean value, a most frequent value, a minimum value, and a weighted average value, in addition to the maximum value and the average value.
  • the convolutional neural network of the present disclosure may be used to extract “features” such as a border, a line color, and the like from input data (image), and may include a plurality of layers. Each layer may receive input data and may generate output data by processing the input data of the layer.
  • the convolutional neural network may output an input image or a feature map generated by convolving an input feature map with filter kernels as output data.
  • the initial layers of the convolutional neural network can be operated to extract low-level features such as edges or gradients from the input.
  • the next layers of the neural network can gradually extract more complex features such as eyes, nose, etc.
  • the convolutional neural network may include a convolutional layer in which a convolution operation is performed, as well as a pooling layer in which a pooling operation is performed.
  • the pooling technique is a technique used to reduce the spatial size of data in the pooling layer.
  • the pooling technique includes a max pooling technique that selects a maximum value in a corresponding domain and an average pooling technique that selects an average value of the domain.
  • In general, the max pooling technique is used, and the pooling window size and interval (stride) are typically set to the same value.
  • the stride refers to an interval to be moved when a filter is applied to the input data, that is, an interval to which the filter is moved, and the stride may also be used to adjust the size of the output data.
  • At least one image-based severity learning model may be configured such that learning models having different structures are constructed.
  • The integrated lesion learning apparatus can build a learning model that constructs the image-based integrated lesion prediction result by performing ensemble learning on the at least one image-based lesion prediction result provided from the image-based severity learning models; for example, an ensemble model can be built by learning weights for these results.
  • the integrated lesion learning apparatus may learn at least one clinical data-based learning model that receives clinical data and outputs a lesion prediction result.
  • The clinical data may include biosignals (e.g., ECG, PPG, EMG) that measure a user's biological changes, or data detected from body fluids, urine, biopsies, and the like produced by the user's body.
  • The clinical data-based learning model receives biosignals (e.g., ECG, PPG, EMG) or data detected from body fluids, urine, biopsies, and the like, and the probability of developing a specific disease may be set as its objective variable. Accordingly, the clinical data-based learning model may be trained to output a lesion prediction result corresponding to the above-described clinical data.
  • In the present disclosure, clinical data is exemplified as biosignals (e.g., ECG, PPG, EMG) that measure a user's biological changes, or data detected from body fluids, urine, biopsies, and the like produced by the user's body; however, the present disclosure is not limited thereto, and the type of clinical data may be variously changed.
  • the configuration of the clinical data-based learning unit is exemplified based on the type of clinical data described above, but the configuration of the clinical data-based learning unit may be variously changed according to the type of clinical data.
  • In the following, the first clinical data is exemplified as a biosignal (e.g., ECG, PPG, EMG) that measures a user's biological changes, the second clinical data is exemplified as data detected from body fluids, urine, biopsies, and the like produced by the user's body, and the operation of training at least one clinical data-based learning model on this basis is illustrated in more detail.
  • The integrated lesion learning apparatus may build a first clinical data-based learning model that learns the first clinical data.
  • The integrated lesion learning apparatus can remove noise from the first clinical data, that is, the biosignals (e.g., ECG, PPG, EMG), and can extract from the noise-removed biosignals a diagnostic section to be used for learning or detection.
  • The integrated lesion learning apparatus may construct a first clinical data-based learning model that learns by using the biosignal of the diagnostic section as input data and the lesion severity as the target variable. Since the first clinical data may be data configured in sequential form, the first clinical data-based learning model can perform learning on the first clinical data based on a recurrent neural network (RNN) method.
  • the integrated lesion learning apparatus may build a second clinical data-based learning model that performs learning on the second clinical data.
  • The second clinical data, that is, the data detected from body fluids, urine, biopsies, and the like produced by the user's body, may be configured in various forms, and the integrated lesion learning apparatus may therefore normalize the second clinical data.
  • The integrated lesion learning apparatus may train a second clinical data-based learning model that learns by using the normalized second clinical data as input data and the lesion severity as the target variable.
  • The second clinical data-based learning model may be trained based on a feed-forward neural network (FFNN) method.
  • the integrated lesion learning apparatus may receive the lesion prediction result provided in steps S1010 and S1020, and as an output corresponding thereto, an integrated learning model for learning the final lesion prediction result may be constructed.
  • The integrated learning model uses the plurality of lesion prediction results provided in steps S1010 and S1020 as input data, and may be set to output the same target variable as the one set when training the image-based learning model and the clinical data-based learning model. Accordingly, the integrated learning model can build an ensemble model by learning weights between the outputs of the image-based learning model and the clinical data-based learning model.
  • FIG. 11 is a flowchart illustrating a procedure of a method for diagnosing a lesion according to an embodiment of the present disclosure.
  • a method for diagnosing a lesion according to an embodiment of the present disclosure may be performed by the apparatus for diagnosing a lesion.
  • the apparatus for diagnosing a lesion may receive a medical image and output an image-based lesion prediction result corresponding thereto.
  • the apparatus for diagnosing lesions may output a probability of developing a specific disease in a specific region included in the medical image as an image-based lesion prediction result using at least one image-based learning model.
  • The medical image is an image of the entire body or of a specific diagnostic area captured through various imaging techniques, and may include a T2-weighted (T2) image, an apparent diffusion coefficient (ADC) image, a STIR image, a T1 image, and a T1 with agents image, as well as parametric MRI such as FLAIR, X-ray images, CT images, and the like.
  • The lesion diagnosis apparatus may include a lesion area detection learning model, which receives a medical image of the user's body (hereinafter referred to as the 'original medical image') and detects a medical image in which the lesion area is extracted from the original medical image (hereinafter referred to as the 'lesion area image').
  • a medical image may be selectively used based on the characteristics of a body organ or a diagnosis region, or a lesion existing in a body organ or diagnosis region.
  • the lesion diagnosis apparatus may select a T2-weighted (T2) image, an apparent diffusion coefficients (ADC) image, or the like as an input of at least one image-based learning model.
  • the lesion diagnosis apparatus may select a STIR image, a T1 image, a T1 with Agents image, and a T2 image as inputs of at least one image-based learning model.
  • The lesion diagnosis apparatus may select a T1 image, a T2 image, or a FLAIR image as an input of at least one image-based learning model.
  • the operation of detecting the lesion region image from the original medical image by the lesion diagnosis apparatus may be performed based on a convolutional neural network (CNN) technique or a pooling technique.
  • The lesion diagnosis apparatus may include an image-based severity learning model, which may be a learning model built based on a convolutional neural network (CNN) technique or a pooling technique.
  • the image-based severity learning models may consist of learning models of different structures, so that even when the same lesion area image is input, the different learning models can output different image-based lesion prediction results; one possible model is sketched below.
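As a rough illustration of such a model, the sketch below builds one possible image-based severity classifier from convolution and pooling layers, as the text describes. PyTorch and every layer size here are assumptions chosen for readability, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class SeverityCNN(nn.Module):
    """One hypothetical image-based severity learning model (CNN + pooling)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # max pooling halves the spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # average pooling to one value per channel
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (batch, 1, H, W) lesion-area image
        return self.classifier(self.features(x).flatten(1))  # severity logits

model = SeverityCNN()
logits = model(torch.randn(4, 1, 64, 64))    # four dummy lesion-area images
```

Two such classes with different layer stacks would realize the "different structures" mentioned above, each producing its own image-based lesion prediction for the same input.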
  • the lesion diagnosis apparatus may further include an image-based integrated learning model that receives the image-based lesion prediction results and outputs an image-based integrated lesion prediction result, and the image-based integrated lesion prediction result may be calculated through the image-based integrated learning model.
  • the apparatus for diagnosing lesions may receive at least one type of clinical data and output at least one corresponding lesion prediction result.
  • the clinical data may include biosignals (e.g., ECG, PPG, EMG, etc.) that measure the user's biological changes, or data detected from body fluids generated by the user's body, urine, biopsies, and the like.
  • the lesion diagnosis apparatus may include a clinical data-based learning model trained to receive biosignals (e.g., ECG, PPG, EMG, etc.) or data detected from body fluids, urine, biopsies, and the like, and to output, as a lesion prediction result, the probability of developing a specific disease corresponding to the aforementioned clinical data.
  • the clinical data-based learning models may be divided according to the type of input data, and the lesion diagnosis apparatus may operate to identify the type of the input data and provide it to the corresponding clinical data-based learning model.
  • the lesion diagnosis apparatus may remove noise from the biosignal (e.g., ECG, PPG, EMG, etc.) input as the first clinical data, and then extract, from the noise-removed biosignal, the diagnostic section to be used for lesion detection.
  • the lesion diagnosis apparatus may include first clinical data-based learning models (830-1, 830-2, ..., 830-n) that receive the biosignal of the extracted diagnostic section and output a lesion severity, and may output a lesion prediction result based on the first clinical data through these models.
  • the first clinical data-based learning model may be a model trained based on a recurrent neural network (RNN) method.
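As a hedged sketch of this idea, the model below runs a GRU (one recurrent neural network variant) over the extracted diagnostic section of a biosignal and outputs lesion-severity logits. PyTorch, the sequence length, and all sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BiosignalRNN(nn.Module):
    """Hypothetical first clinical-data-based learning model over a biosignal."""
    def __init__(self, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, 1) diagnostic-section samples
        _, h_last = self.rnn(x)        # final hidden state summarizes the sequence
        return self.head(h_last[-1])   # lesion-severity logits

model = BiosignalRNN()
out = model(torch.randn(8, 500, 1))    # eight dummy 500-sample diagnostic sections
```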
  • the second clinical data detected from bodily fluids, urine, biopsy, etc. generated by the user's body may be configured in various forms, and different values may be represented according to each form. Accordingly, the lesion diagnosis apparatus may normalize data input as second clinical data, that is, data detected from a body fluid generated by a user's body, urine, biopsy, and the like. Thereafter, the lesion diagnosis apparatus may output a lesion prediction result based on the second clinical data through a second clinical data-based learning model that receives the normalized second clinical data and outputs a lesion severity.
  • the second clinical data-based learning model may be a model constructed based on a feed-forward neural network (FFNN) method.
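A minimal sketch of this second path, assuming PyTorch and an arbitrary feature count: the tabular second clinical data are normalized per feature and then passed through a small feed-forward neural network.

```python
import torch
import torch.nn as nn

def normalize(x: torch.Tensor) -> torch.Tensor:
    """Per-feature z-score normalization of the second clinical data."""
    return (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-8)

ffnn = nn.Sequential(                  # a feed-forward neural network (FFNN)
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 2),                  # lesion-severity logits
)

raw = torch.randn(16, 10)              # 16 cases x 10 lab measurements (dummy data)
pred = ffnn(normalize(raw))
```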
  • the lesion diagnosis apparatus may calculate a final lesion prediction result by combining an image-based lesion prediction result, a lesion prediction result based on the first clinical data, and a lesion prediction result based on the second clinical data.
  • the final lesion prediction result may be calculated through an integrated learning model built through ensemble learning.
  • the integrated learning model may be a learning model constructed by the lesion integrated learning apparatus 10 of FIG. 1 described above.
  • FIG. 12 is a block diagram illustrating a computing system that executes the integrated lesion learning method and apparatus and the lesion diagnosis method and apparatus according to an embodiment of the present disclosure.
  • the computing system 1000 may include at least one processor 1100 connected through a bus 1200, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage 1600, and a network interface 1700.
  • the processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600.
  • the memory 1300 and the storage 1600 may include various types of volatile or nonvolatile storage media.
  • the memory 1300 may include read only memory (ROM) and random access memory (RAM).
  • the steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by the processor 1100, or in a combination of the two.
  • a software module may reside in a storage medium (i.e., the memory 1300 and/or the storage 1600) such as RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, or a CD-ROM.
  • an exemplary storage medium is coupled to the processor 1100 such that the processor 1100 can read information from, and write information to, the storage medium.
  • the storage medium may be integral with the processor 1100.
  • the processor and storage media may reside within an application specific integrated circuit (ASIC).
  • the ASIC may reside within the user terminal.
  • the processor and storage medium may reside as separate components within the user terminal.
  • exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, but this is not intended to limit the order in which steps are performed, and each step may be performed simultaneously or in a different order if necessary.
  • the exemplary methods may include additional steps, may omit some steps while performing the remaining steps, or may omit some steps while performing additional other steps.
  • various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof.
  • for implementation by hardware, one or more ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), general-purpose processors, controllers, microcontrollers, microprocessors, and the like may be used.
  • the scope of the present disclosure includes software or machine-executable instructions (e.g., an operating system, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium that stores such software or instructions and is executable on the device or computer.

Abstract

According to the present invention, an integrated learning device for lesions can be provided. The integrated learning device for lesions may include: an image-based training unit which trains at least one image-based learning model that receives a medical image and outputs an image-based lesion prediction result; at least one clinical data-based training unit which trains a clinical data-based learning model that receives clinical data and outputs a clinical data-based lesion prediction result; and an integrated training unit which causes an integrated learning model to perform ensemble learning, the integrated learning model receiving the image-based lesion prediction result and the clinical data-based lesion prediction result, and outputting a final lesion prediction result.

Description

Artificial intelligence-based diagnosis support system using ensemble learning algorithm
The present disclosure relates to deep learning model training technology and, more specifically, to a method and apparatus for learning about lesions based on medical images and clinical data, and to a method and apparatus for diagnosing lesions using learning models built on the basis of medical images and clinical data.
Deep learning learns from a very large amount of data and, when new data is input, probabilistically selects the answer with the highest likelihood on the basis of the learning results. Such deep learning can operate adaptively according to the image, and because it automatically discovers feature factors in the process of training a model on data, attempts to utilize it in the artificial intelligence field have recently been increasing.
Since a lesion manifests through symptoms, signs, or biological changes occurring in the body, a doctor generally determines the lesion by checking the symptoms, signs, or biological changes exhibited by the patient.
A particular lesion may show signs in one specific area of the body, but some lesions appear in combination across several areas of the body, and the accompanying biological changes may likewise appear in combination. It is therefore difficult to detect a disease or lesion by considering only the symptoms or signs appearing in a single area of the patient. For example, a disease such as systemic lupus erythematosus, a kind of rheumatic disease, may exhibit simultaneous, multiple symptoms throughout the body.
An object of the present disclosure is to provide an integrated lesion learning method and apparatus that learn lesion severity by comprehensively considering images of the body together with biological changes occurring in the body.
Another object of the present disclosure is to provide a lesion diagnosis method and apparatus that predict lesion severity by comprehensively reflecting images of the body together with biological changes occurring in the body.
Another object of the present disclosure is to provide a method and apparatus that learn the progression of a disease, the relationships between diseases, metastasis states, and the like in a complex manner by comprehensively reflecting the various symptoms and signs expressed in the body.
Another object of the present disclosure is to provide a diagnosis method and apparatus capable of predicting the progression of a disease, the relationships between diseases, metastasis states, and the like in a complex manner by comprehensively reflecting the various symptoms and signs expressed in the body.
The technical objects to be achieved in the present disclosure are not limited to those mentioned above, and other technical objects not mentioned will be clearly understood by those of ordinary skill in the art to which the present disclosure belongs from the following description.
According to an aspect of the present disclosure, an integrated lesion learning apparatus may be provided. The apparatus may include: an image-based training unit that trains at least one image-based learning model that receives a medical image and outputs an image-based lesion prediction result; at least one clinical data-based training unit that trains a clinical data-based learning model that receives clinical data and outputs a clinical data-based lesion prediction result; and an integrated training unit that performs ensemble learning of an integrated learning model that receives the image-based lesion prediction result and the clinical data-based lesion prediction result and outputs a final lesion prediction result.
According to another aspect of the present disclosure, an integrated lesion learning method may be provided. The method may include: training at least one image-based learning model that receives a medical image and outputs an image-based lesion prediction result; training at least one biosignal-based learning model that receives a biosignal and outputs a biosignal-based lesion prediction result corresponding to the biosignal; training at least one clinical information-based learning model that receives at least one piece of clinical information obtained through clinical testing and outputs a clinical data-based lesion prediction result corresponding to the clinical information; and performing ensemble learning of an integrated learning model that receives the image-based lesion prediction result, the biosignal-based lesion prediction result, and the clinical data-based lesion prediction result and outputs a final lesion prediction result.
According to another aspect of the present disclosure, a lesion diagnosis apparatus may be provided. The apparatus, which diagnoses a lesion using learning models trained on lesions, may include: an image-based prediction unit that outputs an image-based lesion prediction result corresponding to an input medical image using at least one image-based learning model; at least one clinical data-based prediction unit that outputs a clinical data-based lesion prediction result corresponding to input clinical data using a clinical data-based learning model; and an integrated diagnosis unit that inputs the image-based lesion prediction result and the clinical data-based lesion prediction result into an integrated learning model and checks the final lesion prediction result output through the integrated learning model.
According to another aspect of the present disclosure, a lesion diagnosis method may be provided. The method, which diagnoses a lesion using learning models trained on lesions, may include: outputting an image-based lesion prediction result corresponding to an input medical image using an image-based learning model; checking a biosignal-based lesion prediction result corresponding to an input biosignal using a biosignal-based learning model; checking a clinical data-based lesion prediction result corresponding to at least one piece of clinical information obtained through clinical testing using a clinical information-based learning model; and inputting the image-based lesion prediction result, the biosignal-based lesion prediction result, and the clinical data-based lesion prediction result into an integrated learning model and checking the final lesion prediction result output through the integrated learning model.
The features briefly summarized above with respect to the present disclosure are merely exemplary aspects of the detailed description that follows and do not limit the scope of the present disclosure.
According to the present disclosure, a method and apparatus for learning lesion severity by comprehensively considering images of the body together with biological changes occurring in the body can be provided.
Further, according to the present disclosure, a method and apparatus for predicting lesion severity by comprehensively considering images of the body together with biological changes occurring in the body can be provided.
Further, according to the present disclosure, a method and apparatus for learning the progression of a disease, the relationships between diseases, metastasis states, and the like in a complex manner, by comprehensively reflecting the various symptoms and signs expressed in the body, can be provided.
Further, according to the present disclosure, a method and apparatus capable of predicting the progression of a disease, the relationships between diseases, metastasis states, and the like in a complex manner, by comprehensively reflecting the various symptoms and signs expressed in the body, can be provided.
Further, according to the present disclosure, by predicting diagnosis results through a model that has learned the progression of a disease, the relationships between diseases, metastasis states, and the like from a large amount of data on the various symptoms and signs expressed in the body, a diagnosis result with relatively high reliability compared to diagnosis or judgment based on personal experience can be derived.
The effects obtainable in the present disclosure are not limited to those mentioned above, and other effects not mentioned will be clearly understood by those of ordinary skill in the art from the following description.
FIG. 1 is a block diagram showing the configuration of an integrated lesion learning apparatus according to an embodiment of the present disclosure.
FIG. 2 is a diagram illustrating the training operation of the learning models provided in the integrated lesion learning apparatus according to an embodiment of the present disclosure.
FIG. 3 is a block diagram showing the detailed configuration of the image-based training unit provided in the integrated lesion learning apparatus according to an embodiment of the present disclosure.
FIG. 4 is a block diagram showing the detailed configuration of the first clinical data-based training unit provided in the integrated lesion learning apparatus according to an embodiment of the present disclosure.
FIG. 5 is a block diagram showing the detailed configuration of the second clinical data-based training unit provided in the integrated lesion learning apparatus according to an embodiment of the present disclosure.
FIG. 6 is a block diagram showing the configuration of a lesion diagnosis apparatus according to an embodiment of the present disclosure.
FIG. 7 is a block diagram showing the detailed configuration of the image-based detection unit provided in the lesion diagnosis apparatus according to an embodiment of the present disclosure.
FIG. 8 is a block diagram showing the detailed configuration of the first clinical data-based detection unit provided in the lesion diagnosis apparatus according to an embodiment of the present disclosure.
FIG. 9 is a block diagram showing the detailed configuration of the second clinical data-based detection unit provided in the lesion diagnosis apparatus according to an embodiment of the present disclosure.
FIG. 10 is a flowchart illustrating the procedure of an integrated lesion learning method according to an embodiment of the present disclosure.
FIG. 11 is a flowchart illustrating the procedure of a lesion diagnosis method according to an embodiment of the present disclosure.
FIG. 12 is a block diagram illustrating a computing system that executes the integrated lesion learning method and apparatus and the lesion diagnosis method and apparatus according to an embodiment of the present disclosure.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art may easily implement them. However, the present disclosure may be implemented in various different forms and is not limited to the embodiments described herein.
In describing the embodiments of the present disclosure, when it is determined that a detailed description of a known configuration or function may obscure the subject matter of the present disclosure, that detailed description is omitted. In the drawings, parts unrelated to the description of the present disclosure are omitted, and similar parts are given similar reference numerals.
In the present disclosure, when a component is said to be "connected", "coupled", or "linked" to another component, this may include not only a direct connection but also an indirect connection in which yet another component exists in between. When a component is said to "include" or "have" another component, this means that it may further include other components, not that other components are excluded, unless stated otherwise.
In the present disclosure, terms such as first and second are used only to distinguish one component from another and do not limit the order or importance of the components unless otherwise stated. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be called a second component in another embodiment, and likewise a second component in one embodiment may be called a first component in another embodiment.
In the present disclosure, components that are distinguished from each other are intended to clearly describe their respective characteristics, and this does not mean that the components are necessarily separated. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed into a plurality of hardware or software units. Accordingly, even if not separately stated, such integrated or distributed embodiments are also included in the scope of the present disclosure.
In the present disclosure, the components described in the various embodiments are not necessarily essential, and some may be optional. Accordingly, an embodiment consisting of a subset of the components described in one embodiment is also included in the scope of the present disclosure, as are embodiments that include other components in addition to those described in the various embodiments.
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings.
FIG. 1 is a block diagram showing the configuration of an integrated lesion learning apparatus according to an embodiment of the present disclosure.
Referring to FIG. 1, the integrated lesion learning apparatus 10 may include an image-based training unit 11, at least one clinical data-based training unit 13, and an integrated training unit 15.
The image-based training unit 11 may train an image-based learning model that receives a medical image and outputs a lesion prediction result. In particular, the image-based training unit 11 may include a plurality of image-based learning models, and the probability that a specific disease appears in a specific region included in the medical image may be set as the target variable of the image-based learning model. The detailed configuration and operation of the image-based training unit 11 are described in detail with reference to FIG. 3 below.
Furthermore, the medical image is an image of the entire body or a specific diagnosis region captured through various imaging techniques, and may include parametric MRI such as T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, STIR images, T1 images, T1-with-agents images, and FLAIR, as well as X-ray images, CT images, and the like.
The clinical data-based training unit 13 may train a clinical data-based learning model that receives clinical data and outputs a lesion prediction result. Here, the clinical data may include biosignals that measure the user's biological changes (e.g., ECG, PPG, EMG, etc.) or data detected from body fluids generated by the user's body, urine, biopsies, and the like. Based on this, the clinical data-based learning model receives biosignals (e.g., ECG, PPG, EMG, etc.) or data detected from body fluids, urine, biopsies, and the like, and the probability that a specific disease appears in correspondence with the clinical data may be set as the target variable of the clinical data-based learning model. Accordingly, the clinical data-based learning model may be trained to output a lesion prediction result corresponding to the clinical data.
At least one of the clinical data-based training units 13 described above may be provided in the integrated lesion learning apparatus 10, and each clinical data-based training unit 13 may include its own clinical data-based learning model.
The clinical data-based learning model may perform learning based on at least one of logistic regression, multi-layer perceptron, stochastic gradient descent, bagging, random forest, decision tree, support vector machine, k-nearest neighbor, multinomial logistic regression, linear regression, Bayesian regression, and kernel ridge regression. Although an embodiment of the present disclosure illustrates machine learning algorithms used for the clinical data-based learning model, the present disclosure is not limited thereto, and various other machine learning algorithms may be used in addition to those illustrated.
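As one concrete possibility among the algorithms enumerated above, the hedged sketch below trains a random forest on placeholder clinical features with scikit-learn; the feature and label arrays are dummies for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 12))            # 300 cases x 12 clinical features (dummy)
y = rng.integers(0, 2, size=300)          # lesion present / absent (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
lesion_prob = clf.predict_proba(X)[:, 1]  # clinical-data-based lesion prediction
```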
Furthermore, the clinical data-based training unit 13 may be configured to be divided according to the type of input data. For example, the clinical data-based training unit 13 may include a first clinical data-based training unit 13-1 that learns biosignals (e.g., ECG, PPG, EMG, etc.) and a second clinical data-based training unit 13-2 that learns data detected from body fluids, urine, biopsies, and the like.
Based on the foregoing, the clinical data-based training unit 13 may be configured to check the type of the input data, select the training unit 13-1 or 13-2 corresponding to the identified type, and provide the clinical data to it. For example, when the input clinical data is identified as a biosignal (e.g., ECG, PPG, EMG, etc.), the clinical data may be input to the first clinical data-based training unit 13-1, and when the input clinical data is data detected from a body fluid, urine, biopsy, or the like, the clinical data may be input to the second clinical data-based training unit 13-2.
As another example, the clinical data-based training unit 13 may provide a user interface through which biosignals (e.g., ECG, PPG, EMG, etc.) are input to the first clinical data-based training unit 13-1 and data detected from body fluids, urine, biopsies, and the like are input to the second clinical data-based training unit 13-2. For example, the clinical data suited to the first clinical data-based training unit 13-1 and the second clinical data-based training unit 13-2 may be set at the design stage, and the clinical data-based training unit 13 may provide an environment in which the corresponding clinical data are input to the first and second clinical data-based training units 13-1 and 13-2 through the user interface.
The detailed structure and operation of the clinical data-based training unit 13 are described in detail with reference to FIGS. 4 and 5 below.
Meanwhile, the integrated training unit 15 may receive the lesion prediction results output from the image-based training unit 11 and the clinical data-based training unit 13, and may include an integrated learning model that learns the final lesion prediction result as the corresponding output. In particular, the integrated training unit 15 may perform ensemble learning on the plurality of lesion prediction results provided from the image-based training unit 11 and the clinical data-based training unit 13 to construct the final lesion prediction result.
Specifically, the lesion prediction results output from the image-based training unit 11 and the clinical data-based training unit 13 are set as the input data of the integrated learning model, and the same target variable that was set when training the image-based learning model and the clinical data-based learning model may be set as the output data of the integrated learning model. Accordingly, the integrated learning model can build an ensemble model by learning weights over the outputs of the image-based learning model and the clinical data-based learning model.
FIG. 2 is a diagram illustrating the training operation of the learning models provided in the integrated lesion learning apparatus according to an embodiment of the present disclosure.
FIG. 2 illustrates that the integrated lesion learning apparatus according to an embodiment of the present disclosure includes an image-based learning model 21, a first clinical data-based learning model 22, a second clinical data-based learning model 23, and an integrated learning model 25.
To train the learning models provided in the integrated learning apparatus, an integrated training data set 200 may be constructed, and data combining a plurality of integrated training data sets 200 may constitute the integrated training data.
The integrated training data set 200 may be configured to include medical image data 210, first clinical data 220, and second clinical data 230. The integrated training data set 200 may also be configured to include an image reading result 215 containing the lesion and lesion severity for a specific region in the medical image, a first clinical reading result 225 containing the lesion and lesion severity for the first clinical data, and a second clinical reading result 235 containing the lesion and lesion severity for the second clinical data; the image reading result 215, the first clinical reading result 225, and the second clinical reading result 235 may be configured to correspond to the medical image data 210, the first clinical data 220, and the second clinical data 230, respectively. In addition, the integrated training data set 200 may be configured to include final reading result data 250 containing the lesion and lesion severity determined on the basis of the medical image data 210, the first clinical data 220, and the second clinical data 230.
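One way to picture a single integrated training example with the fields just listed is the sketch below; the field names are assumptions chosen for readability, with the reference numerals carried in comments.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class IntegratedTrainingExample:
    medical_image: Any          # 210: medical image data
    clinical_1: Any             # 220: first clinical data (biosignal)
    clinical_2: Any             # 230: second clinical data (e.g., lab results)
    image_reading: float        # 215: lesion/severity read from the image
    clinical_1_reading: float   # 225: lesion/severity read from the first clinical data
    clinical_2_reading: float   # 235: lesion/severity read from the second clinical data
    final_reading: float        # 250: final lesion/severity determination
```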
The image-based learning model 21 may receive the medical image data 210 and be provided with the image reading result 215 as its target variable to perform training. The first clinical data-based learning model 22 may receive the first clinical data 220 and be provided with the first clinical reading result 225 as its target variable to perform training. Likewise, the second clinical data-based learning model 23 may receive the second clinical data 230 and be provided with the second clinical reading result 235 as its target variable to perform training.
In addition, the integrated learning model 25 may receive the image reading result 215, the first clinical reading result 225, the second clinical reading result 235, and the like, and be provided with the final reading result data 250 as its target variable to perform training. For example, the integrated learning model 25 may generate each of the image reading result 215, the first clinical reading result 225, and the second clinical reading result 235 as one feature vector, and generate an ensemble model trained to map each feature vector to the class of the target variable.
The image-based learning model 21, the first clinical data-based learning model 22, the second clinical data-based learning model 23, and the integrated learning model 25 may be built by performing training on the integrated training data obtained by combining a plurality of integrated training data sets 200.
An embodiment of the present disclosure illustrates training the above-described learning models using the integrated training data set 200, but the present disclosure is not limited thereto. As another example, a first data set 260 including the medical image data 210, the first clinical data 220, the second clinical data 230, the image reading result 215, the first clinical reading result 225, the second clinical reading result 235, and the like may be constructed, and a second data set 270 including the image reading result 215, the first clinical reading result 225, the second clinical reading result 235, and the final reading result data 250 may be constructed. Furthermore, the second data set 270 may be a validation set used for verification, or may be a new data set holding information on the same features and target variables as the first data set 260.
Furthermore, the above-described embodiment illustrates simultaneously training the image-based learning model 21, the first clinical data-based learning model 22, the second clinical data-based learning model 23, and the integrated learning model 25 using the integrated training data set 200, but the present disclosure is not limited thereto. For example, by training the image-based learning model 21, the first clinical data-based learning model 22, and the second clinical data-based learning model 23 using the first data set 260 and training the integrated learning model 25 using the second data set 270, the training of the integrated learning model 25 may be performed separately.
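The separated training just described corresponds to a classic stacking setup. The hedged sketch below, with scikit-learn stand-ins for the base models and dummy data, trains the base models on a first data set and then trains the integrated model on the base models' predictions over a second data set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X1, y1 = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)   # first data set (260)
X2, y2 = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)   # second data set (270)

# Stage 1: train the base models (stand-ins for the image/clinical models).
base_models = [RandomForestClassifier(random_state=0).fit(X1, y1),
               LogisticRegression().fit(X1, y1)]

# Stage 2: train the integrated model on base predictions over the second set.
stacked = np.column_stack([m.predict_proba(X2)[:, 1] for m in base_models])
integrated = LogisticRegression().fit(stacked, y2)
```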
FIG. 3 is a block diagram showing the detailed configuration of the image-based training unit provided in the integrated lesion learning apparatus according to an embodiment of the present disclosure.
Referring to FIG. 3, the image-based training unit 30 may include a lesion area detection unit 31, image-based severity training units 32a, 32b, and 32c, and an image-based integrated training unit 35 that performs ensemble learning on the image-based lesion prediction results output from each of the plurality of image-based severity training units 32a, 32b, and 32c. Here, the plurality of image-based severity training units 32a, 32b, and 32c preferably each have a different learning structure.
Based on this, the lesion area detection unit 31 may receive a medical image of the user's body (hereinafter referred to as the 'original medical image') and may detect a medical image in which the diagnosis region is extracted from the original medical image (hereinafter referred to as the 'lesion area image'). The lesion area detection unit 31 may then detect the lesion area image and provide it as input to the image-based severity training units 32a, 32b, and 32c.
The operation by which the lesion area detection unit 31 extracts the lesion area image from the original medical image may be performed based on a convolutional neural network (CNN) technique, a pooling technique, or the like. For example, the lesion area detection unit 31 may include a lesion area detection learning model 310 built through training that takes the original medical image as input and the lesion area image as output. Then, when the original medical image is input, the lesion area detection unit 31 inputs the original medical image to the lesion area detection learning model 310 and can detect the corresponding lesion area image through the lesion area detection learning model 310.
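As an illustration of such a lesion area detection learning model, the sketch below is a small convolutional encoder-decoder that maps an original medical image to a per-pixel lesion-area map; PyTorch and every layer size are assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class LesionAreaDetector(nn.Module):
    """Hypothetical model: original medical image in, lesion-area map out."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # pooling step
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, 1, 3, padding=1),          # one-channel lesion-area map
        )

    def forward(self, x):                            # x: (batch, 1, H, W) original image
        return torch.sigmoid(self.decode(self.encode(x)))

mask = LesionAreaDetector()(torch.randn(2, 1, 64, 64))  # per-pixel lesion probability
```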
Furthermore, a medical image may be selectively used based on the characteristics of each body organ or diagnosis region, or of the lesions present in the body organ or diagnosis region.
For example, when the body organ or diagnosis region is the prostate region, the image-based training unit 30 may use T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, and the like as inputs to the lesion area detection unit 31. As another example, when the body organ or diagnosis region is the liver region, the image-based training unit 30 may use STIR images, T1 images, T1-with-agents images, T2 images, and the like as inputs to the lesion area detection unit 31. As another example, when the body organ or diagnosis region is the brain region, the image-based training unit 30 may use T1 images, T2 images, FLAIR, and the like as inputs to the lesion area detection unit 31.
Meanwhile, the image-based severity training units 32a, 32b, and 32c may train the image-based severity learning models 320a, 320b, and 320c; they may receive the lesion area image as the analysis target image for training, and training of the image-based severity learning models 320a, 320b, and 320c may be performed by labeling the severity of a specific object or specific region included in the lesion area image.
Furthermore, the image-based severity training units 32a, 32b, and 32c may train the image-based severity learning models 320a, 320b, and 320c based on a convolutional neural network (CNN) technique or a pooling technique.
For example, the image-based severity learning models 320a, 320b, and 320c may analyze the input image to extract features of the image. The features may be local features of each region of the image. The image-based severity learning models 320a, 320b, and 320c may extract the features of the input image using a general convolutional neural network (CNN) technique or a pooling technique. The pooling technique may include at least one of a max pooling technique and an average pooling technique. However, the pooling technique referred to in the present disclosure is not limited to max pooling or average pooling, and includes any technique for obtaining a representative value of an image region of a predetermined size. For example, the representative value used in the pooling technique may be, in addition to the maximum and average values, at least one of the variance, standard deviation, median, mode (most frequent value), minimum, weighted average, and the like.
The convolutional neural network of the present disclosure may be used to extract "features" such as edges and line colors from the input data (image), and may include a plurality of layers. Each layer may receive input data and process the input data of that layer to generate output data. The convolutional neural network may output, as output data, a feature map generated by convolving the input image or an input feature map with filter kernels. The initial layers of the convolutional neural network may operate to extract low-level features such as edges or gradients from the input. Subsequent layers of the neural network may extract progressively more complex features such as eyes, nose, and so on.
The convolutional neural network may also include a pooling layer, in which a pooling operation is performed, in addition to the convolution layers in which convolution operations are performed. The pooling technique is used in the pooling layer to reduce the spatial size of the data. Specifically, pooling techniques include max pooling, which selects the maximum value in the corresponding region, and average pooling, which selects the average value of the region; max pooling is generally used in the image recognition field. In pooling, the pooling window size and the stride are generally set to the same value. Here, the stride means the interval by which the filter moves when applied to the input data, that is, the interval of filter movement, and the stride may also be used to adjust the size of the output data.
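The window/stride behavior described here can be seen in a few lines of NumPy; the sketch below implements max pooling with the window size equal to the stride, the common setting mentioned above.

```python
import numpy as np

def max_pool2d(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Max pooling with window == stride == size."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(x))   # [[ 5.  7.] [13. 15.]] -> spatial size halved
```

Average pooling, or any of the other representative values mentioned above (variance, median, mode, and so on), would replace the max reduction in the last line.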
In particular, the image-based severity learning models 320a, 320b, and 320c may be configured so that learning models of different structures are built, and each learning model 320a, 320b, and 320c may be built to output an image-based lesion prediction result 321a, 321b, and 321c, respectively. The image-based lesion prediction results 321a, 321b, and 321c provided by the respective learning models 320a, 320b, and 320c may be configured to be provided as inputs to the image-based integrated training unit 35.
Correspondingly, the image-based integrated training unit 35 may include an image-based integrated learning model 350 that receives the image-based lesion prediction results 321a, 321b, and 321c and learns an image-based integrated lesion prediction result as the corresponding output. In particular, the image-based integrated training unit 35 may perform ensemble learning on the plurality of image-based lesion prediction results 321a, 321b, and 321c provided by the image-based severity learning models 320a, 320b, and 320c to construct the image-based integrated lesion prediction result.
Specifically, by setting the image-based lesion prediction results 321a, 321b, and 321c as the input data of the image-based integrated learning model 350 and setting a target variable corresponding to the lesion area image when training the image-based integrated learning model 350, an ensemble model can be built by learning weights for the plurality of image-based lesion prediction results 321a, 321b, and 321c.
Hereinafter, the configuration and operation of the clinical data-based training unit are described in more detail with reference to FIGS. 4 and 5.
First, although an embodiment of the present disclosure illustrates the clinical data as biosignals that measure the user's biological changes (e.g., ECG, PPG, EMG, etc.) or data detected from body fluids generated by the user's body, urine, biopsies, and the like, the present disclosure is not limited thereto, and the types of clinical data may be variously changed. In addition, although the configuration of the clinical data-based training unit is illustrated based on the types of clinical data described above, the configuration of the clinical data-based training unit may be variously changed according to the type of clinical data.
Hereinafter, in the embodiments of the present disclosure, the first clinical data-based training unit exemplifies a component that receives biosignals measuring the user's biological changes (e.g., ECG, PPG, EMG, etc.) and trains a learning model, and the second clinical data-based training unit exemplifies a component that receives data detected from body fluids generated by the user's body, urine, biopsies, and the like and trains a learning model.
도 4는 본 개시의 일 실시예에 따른 병변 통합 학습 장치에 구비된 제1임상 데이터 기반 학습부의 상세 구성을 나타내는 블록도이다.4 is a block diagram illustrating a detailed configuration of a first clinical data-based learning unit provided in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
도 4를 참조하면, 제1임상 데이터 기반 학습부(40)는 입력되는 생체신호(예, ECG, PPG, EMG 등)로부터 노이즈를 제거하는 노이즈 필터부(41), 및 노이즈 제거된 생체신호로부터 학습 또는 검출에 사용할 진단구간을 추출하는 진단구간 추출부(42)를 포함할 수 있다.Referring to FIG. 4, the first clinical data-based learning unit 40 includes a noise filter unit 41 that removes noise from an input biosignal (eg, ECG, PPG, EMG, etc.), and a noise-removed biosignal. It may include a diagnostic section extraction unit 42 for extracting a diagnostic section to be used for learning or detection.
In addition, the first clinical data-based learning unit 40 may include a lesion signal learning unit 43. The lesion signal learning unit 43 may include first clinical data-based learning models 430-1, 430-2, ..., 430-n that are trained using the biosignals of the diagnostic sections as input data and the lesion severity as the target variable.
Since the first clinical data may be data organized in a sequential form, the first clinical data-based learning models 430-1, 430-2, ..., 430-n may perform learning on the first clinical data on the basis of a recurrent neural network (RNN) scheme.
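The following sketch shows one plausible shape for such a model, assuming a GRU variant of the RNN and a small number of severity grades; the class name BiosignalRNN, the hidden size, and the section length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiosignalRNN(nn.Module):
    """Illustrative RNN: diagnostic-section biosignal in, lesion severity out."""
    def __init__(self, hidden: int = 64, num_classes: int = 4):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)   # severity grades as classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1) sequential samples of one diagnostic section
        _, h = self.rnn(x)
        return self.head(h[-1])                      # logits over severity grades

logits = BiosignalRNN()(torch.randn(8, 500, 1))      # 8 sections of 500 samples
```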
FIG. 5 is a block diagram illustrating the detailed configuration of the second clinical data-based learning unit provided in the integrated lesion learning apparatus according to an embodiment of the present disclosure.
The second clinical data-based learning unit 50 may include a data normalization unit 51 and a lesion data learning unit 52.
The second clinical data, detected from body fluids, urine, biopsies, and the like produced by the user's body, come in various forms and may each exhibit values on different scales. Accordingly, the data normalization unit 51 may normalize the second clinical data of these various forms.
The lesion data learning unit 52 may include a second clinical data-based learning model 520 that is trained using the normalized second clinical data as input data and the lesion severity as the target variable. Preferably, the second clinical data-based learning model 520 may perform learning on the second clinical data on the basis of a feed-forward neural network (FFNN) scheme.
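A compact sketch of both stages is given below, assuming z-score normalization for the data normalization unit and a two-layer feed-forward network for the learning model 520; the feature count and layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

def normalize(x: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    """Z-score normalization so measurements on different scales become comparable."""
    return (x - mean) / (std + 1e-8)

class ClinicalFFNN(nn.Module):
    """Feed-forward network: normalized clinical features in, severity out."""
    def __init__(self, num_features: int, num_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

x = torch.randn(8, 10)                  # stand-in for 10 lab measurements
stats = x.mean(0), x.std(0)
logits = ClinicalFFNN(num_features=10)(normalize(x, *stats))
```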
FIG. 6 is a block diagram showing the configuration of a lesion diagnosis apparatus according to an embodiment of the present disclosure.
Referring to FIG. 6, the lesion diagnosis apparatus 60 may include an image-based detection unit 61, at least one clinical data-based detection unit 63, and an integrated diagnosis unit 65.
The image-based detection unit 61 may include at least one image-based learning model 610 that receives a medical image and outputs the corresponding image-based lesion prediction result, and, through the at least one image-based learning model 610, may output the probability that a specific disease manifests in a specific region included in the medical image as the lesion prediction result.
Further, the medical image is an image of the entire body or of a specific diagnostic region captured by various imaging techniques, and may include parametric MRI images such as T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, STIR images, T1 images, contrast-enhanced T1 (T1 with agents) images, and FLAIR images, as well as X-ray images, CT images, and the like.
The clinical data-based detection unit 63 may include clinical data-based learning models 630-1 and 630-2 that receive clinical data and output lesion prediction results. Here, the clinical data may include biosignals measuring biological changes of the user (e.g., ECG, PPG, EMG, etc.) or data detected from body fluids, urine, biopsies, and the like produced by the user's body. On this basis, the clinical data-based learning models 630-1 and 630-2 may be models trained to receive such biosignals or detected data and to output, as the lesion prediction result, the probability that a specific disease manifests given the clinical data.
At least one such clinical data-based detection unit 63 may be provided in the lesion diagnosis apparatus 60, and each clinical data-based detection unit 63 may include its own clinical data-based learning model 630-1, 630-2.
Furthermore, the clinical data-based detection units 63 may be organized according to the type of input data. For example, the clinical data-based detection unit 63 may include a first clinical data-based detection unit 63-1 that takes biosignals (e.g., ECG, PPG, EMG, etc.) as input, and a second clinical data-based detection unit 63-2 that takes data detected from body fluids, urine, biopsies, and the like as input.
The clinical data-based detection unit 63 may be configured to identify the type of input data, select the detection unit 63-1 or 63-2 corresponding to the identified type, and forward the clinical data to it. As another example, a user interface may be provided so that biosignals (e.g., ECG, PPG, EMG, etc.) are input to the first clinical data-based detection unit 63-1, while data detected from body fluids, urine, biopsies, and the like are input to the second clinical data-based detection unit 63-2.
Meanwhile, the integrated diagnosis unit 65 may receive the lesion prediction results output from the image-based detection unit 61 and the clinical data-based detection unit 63, and may determine the final lesion prediction result as the corresponding output. In particular, the integrated diagnosis unit 65 may include an integrated learning model 650 that has been ensemble-learned to receive the plurality of lesion prediction results provided from the image-based detection unit 61 and the clinical data-based detection unit 63 and to provide the corresponding final lesion prediction result.
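The sketch below illustrates one possible inference path for such an integrated model, assuming the base predictions are scalar probabilities that are concatenated and weighted by a trained linear layer; the class name IntegratedDiagnosis and the tensor shapes are assumptions, not details from the disclosure.

```python
import torch
import torch.nn as nn

class IntegratedDiagnosis(nn.Module):
    """Illustrative final-stage ensemble over image- and clinical-based predictions."""
    def __init__(self, num_inputs: int):
        super().__init__()
        self.combine = nn.Linear(num_inputs, 1)

    def forward(self, image_pred, clinical_preds):
        # Concatenate the image-based result with each clinical-data result.
        joint = torch.cat([image_pred, *clinical_preds], dim=1)
        return torch.sigmoid(self.combine(joint))    # final lesion prediction

final = IntegratedDiagnosis(num_inputs=3)(
    torch.rand(8, 1),                        # image-based lesion prediction
    [torch.rand(8, 1), torch.rand(8, 1)],    # first/second clinical predictions
)
```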
The image-based learning model 610, the clinical data-based learning models 630-1 and 630-2, and the integrated learning model 650 may be learning models built by the integrated lesion learning apparatus 10 of FIG. 1 described above.
FIG. 7 is a block diagram illustrating the detailed configuration of the image-based detection unit included in the lesion diagnosis apparatus according to an embodiment of the present disclosure.
Referring to FIG. 7, the image-based detection unit 70 may include a lesion region detection unit 71, image-based severity detection units 72a, 72b, and 72c, and an image-based integrated detection unit 75 that receives the image-based lesion prediction results output from the plurality of image-based severity detection units 72a, 72b, and 72c and outputs the corresponding image-based integrated lesion prediction result. Here, the plurality of image-based severity detection units 72a, 72b, and 72c preferably each have a different learning structure.
A medical image may be selectively used on the basis of each body organ or diagnostic region, or of the characteristics of the lesions present in that organ or region. For example, when the body organ or diagnostic region is the prostate, the image-based detection unit 70 may use T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, and the like as the input of the lesion region detection unit 71. As another example, when the body organ or diagnostic region is the liver, the image-based detection unit 70 may use STIR images, T1 images, T1 with agents images, T2 images, and the like as the input of the lesion region detection unit 71. As yet another example, when the body organ or diagnostic region is the brain, the image-based detection unit 70 may use T1 images, T2 images, FLAIR images, and the like as the input of the lesion region detection unit 71.
On this basis, the lesion region detection unit 71 may include a lesion region detection learning model 710, which receives a medical image of the body where the user's diagnostic region is located (hereinafter referred to as the 'original medical image') and detects a medical image in which the lesion region has been extracted from the original medical image (hereinafter referred to as the 'lesion-region image'). The lesion region detection unit 71 may then provide the detected lesion-region image as the input of the image-based severity detection units 72a, 72b, and 72c.
The operation by which the lesion region detection unit 71 detects the lesion-region image from the original medical image may be performed on the basis of a convolutional neural network (CNN) technique, a pooling technique, or the like. For example, the lesion region detection unit 71 may include a lesion region detection learning model 710 that takes the original medical image as input and produces the lesion-region image as output.
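As a hedged illustration of such a model, the toy network below maps an original image to a per-pixel lesion score using convolution and pooling layers; the encoder-decoder layout, channel counts, and input size are assumptions rather than the actual architecture of model 710.

```python
import torch
import torch.nn as nn

class LesionRegionNet(nn.Module):
    """Toy convolution/pooling detector: original image in, lesion-region mask out."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # pooling step from the text
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.decode(self.encode(x)))   # per-pixel lesion score

mask = LesionRegionNet()(torch.randn(1, 1, 128, 128))       # e.g. one MRI slice
```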
The image-based severity detection units 72a, 72b, and 72c may include image-based severity learning models 720a, 720b, and 720c, which may be learning models built on the basis of a convolutional neural network (CNN) technique or a pooling technique.
In particular, the image-based severity learning models 720a, 720b, and 720c may be learning models of mutually different structures, so that even when they receive the same lesion-region image, the different models output different image-based lesion prediction results 721a, 721b, and 721c, respectively. The image-based lesion prediction results 721a, 721b, and 721c output in this manner may be provided as the input of the image-based integrated detection unit 75.
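One way to realize "models of mutually different structures" is sketched below: a single builder produces CNNs differing in width and depth, and the same lesion-region image yields three different predictions; the builder function and its parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def severity_cnn(width: int, depth: int) -> nn.Module:
    """Build CNN variants that differ in width/depth, i.e. differently structured models."""
    layers, in_ch = [], 1
    for _ in range(depth):
        layers += [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        in_ch = width
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 1)]
    return nn.Sequential(*layers)

models = [severity_cnn(16, 2), severity_cnn(32, 3), severity_cnn(8, 4)]
lesion_img = torch.randn(1, 1, 64, 64)
# Same lesion-region image, three structurally different predictions (721a/b/c).
preds = [torch.sigmoid(m(lesion_img)) for m in models]
```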
The image-based integrated detection unit 75 may include an image-based integrated learning model 750 that receives the image-based lesion prediction results 721a, 721b, and 721c and outputs the image-based integrated lesion prediction result.
For example, the lesion region detection learning model 710, the image-based severity learning models 720a, 720b, and 720c, and the image-based integrated learning model 750 may be built by the image-based learning unit 30 of FIG. 3 described above.
FIG. 8 is a block diagram illustrating the detailed configuration of the first clinical data-based detection unit provided in the lesion diagnosis apparatus according to an embodiment of the present disclosure.
Referring to FIG. 8, the first clinical data-based detection unit 80 may include a noise filter unit 81 that removes noise from an input biosignal (e.g., ECG, PPG, EMG, etc.), and a diagnostic section extraction unit 82 that extracts, from the denoised biosignal, the diagnostic sections to be used for lesion detection.
In addition, the first clinical data-based detection unit 80 may include a lesion signal detection unit 83. The lesion signal detection unit 83 may include first clinical data-based learning models 830-1, 830-2, ..., 830-n that receive the biosignals of the diagnostic sections and output the lesion severity.
Since the first clinical data may be data organized in a sequential form, the first clinical data-based learning models 830-1, 830-2, ..., 830-n may be models trained on the basis of a recurrent neural network (RNN) scheme.
Furthermore, the first clinical data-based learning models 830-1, 830-2, ..., 830-n may be models built by the clinical data-based learning unit of FIG. 4 described above.
FIG. 9 is a block diagram showing the detailed configuration of the second clinical data-based detection unit provided in the lesion diagnosis apparatus according to an embodiment of the present disclosure.
The second clinical data-based detection unit 90 may include a data normalization unit 91 and a lesion data detection unit 92.
The second clinical data, detected from body fluids, urine, biopsies, and the like produced by the user's body, come in various forms, and each form may exhibit values on a different scale. Accordingly, the data normalization unit 91 may normalize the second clinical data of these various forms.
The lesion data detection unit 92 may include a second clinical data-based learning model 920 that receives the normalized second clinical data and outputs the lesion severity. Preferably, the second clinical data-based learning model 920 may be a model built on the basis of a feed-forward neural network (FFNN) scheme, and may, for example, be a model built by the clinical data-based learning unit of FIG. 5 described above.
FIG. 10 is a flowchart illustrating the procedure of an integrated lesion learning method according to an embodiment of the present disclosure.
The integrated lesion learning method according to an embodiment of the present disclosure may be performed by the above-described integrated lesion learning apparatus according to an embodiment of the present disclosure.
First, in step S1010, the integrated lesion learning apparatus may train an image-based learning model that receives a medical image and outputs a lesion prediction result. In particular, the integrated lesion learning apparatus may include a plurality of image-based learning models, and the probability that a specific disease manifests in a specific region included in the medical image may be set as the target variable of the image-based learning model. Here, the medical image is an image of the entire body or of a specific diagnostic region captured by various imaging techniques, and may include parametric MRI images such as T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, STIR images, T1 images, T1 with agents images, and FLAIR images, as well as X-ray images, CT images, and the like.
Specifically, the integrated lesion learning apparatus may receive a medical image of the user's body (hereinafter referred to as the 'original medical image') and may detect a medical image in which the diagnostic region has been extracted from the original medical image (hereinafter referred to as the 'lesion-region image'). The operation of extracting the lesion-region image from the original medical image may be performed on the basis of a convolutional neural network (CNN) technique, a pooling technique, or the like. For example, the integrated lesion learning apparatus may include a lesion region detection learning model obtained through training with the original medical image as input and the lesion-region image as output. Then, when an original medical image is input, the integrated lesion learning apparatus may feed it into the lesion region detection learning model and detect the corresponding lesion-region image through that model.
Furthermore, a medical image may be selectively used on the basis of each body organ or diagnostic region, or of the characteristics of the lesions present in that organ or region.
For example, when the body organ or diagnostic region is the prostate, the integrated lesion learning apparatus may use T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, and the like as the input of the lesion region detection learning model. As another example, when the body organ or diagnostic region is the liver, the apparatus may use STIR images, T1 images, T1 with agents images, T2 images, and the like as that input. As yet another example, when the body organ or diagnostic region is the brain, the apparatus may use T1 images, T2 images, FLAIR images, and the like as that input.
Meanwhile, the integrated lesion learning apparatus may train an image-based severity learning model: it may receive the lesion-region image as the analysis target image for training and, by labeling the severity of a specific object or specific region included in the lesion-region image, perform the training of the image-based severity learning model.
Here, the image-based severity learning model may be trained on the basis of a convolutional neural network (CNN) technique or a pooling technique. Specifically, the image-based severity learning model may analyze the input image and extract its features, which may be local features of each region of the image. The model may extract the features of the input image using a general CNN technique or a pooling technique. The pooling technique may include at least one of a max pooling technique and an average pooling technique. However, the pooling techniques referred to in the present disclosure are not limited to max pooling or average pooling, and include any technique for obtaining a representative value of an image region of a predetermined size. For example, the representative value used in the pooling technique may be, besides the maximum and the average, at least one of the variance, the standard deviation, the median, the mode (most frequent value), the minimum, and a weighted average.
The convolutional neural network of the present disclosure may be used to extract "features" such as edges and line colors from the input data (an image), and may include a plurality of layers. Each layer may receive input data and process it to generate output data. The convolutional neural network may output, as its output data, a feature map generated by convolving the input image or an input feature map with filter kernels. The initial layers of the convolutional neural network may operate to extract low-level features such as edges or gradients from the input, while subsequent layers may extract progressively more complex features such as eyes, a nose, and so on.
In addition to the convolutional layers in which convolution operations are performed, the convolutional neural network may also include pooling layers in which pooling operations are performed. The pooling technique is used in the pooling layers to reduce the spatial size of the data. Specifically, pooling techniques include the max pooling technique, which selects the maximum value of the corresponding region, and the average pooling technique, which selects the average of that region; in the image recognition field, the max pooling technique is generally used. In pooling, the window size and the interval (stride) are generally set to the same value. Here, the stride means the interval by which the filter moves when applied to the input data, and the stride may likewise be used to adjust the size of the output data.
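The sketch below implements pooling in this generalized sense: the window size and stride are explicit, and the representative value is a pluggable function, so max, average, median, or any of the other statistics named above can be used; the function name pool2d is an illustrative assumption.

```python
import numpy as np

def pool2d(x: np.ndarray, size: int, stride: int, reduce=np.max) -> np.ndarray:
    """Generic pooling: any representative value (max, mean, median, ...) per window."""
    h = (x.shape[0] - size) // stride + 1
    w = (x.shape[1] - size) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = x[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = reduce(win)
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)
max_pooled = pool2d(fmap, size=2, stride=2)                    # max pooling
med_pooled = pool2d(fmap, size=2, stride=2, reduce=np.median)  # median as representative value
```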
In particular, the at least one image-based severity learning model may be configured so that learning models of mutually different structures are built. The integrated lesion learning apparatus may then perform ensemble learning on the at least one image-based lesion prediction result provided from the image-based severity learning models, thereby building a learning model that constructs the image-based integrated lesion prediction result.
Specifically, the image-based lesion prediction results are set as the input data of the image-based integrated learning model, a target variable corresponding to the lesion-region image is set, and the training of the image-based integrated learning model is performed, so that weights for the plurality of image-based lesion prediction results are learned and an ensemble model is built.
Meanwhile, in step S1020, the integrated lesion learning apparatus may train at least one clinical data-based learning model that receives clinical data and outputs a lesion prediction result. Here, the clinical data may include biosignals measuring biological changes of the user (e.g., ECG, PPG, EMG, etc.) or data detected from body fluids, urine, biopsies, and the like produced by the user's body. On this basis, the clinical data-based learning model receives such biosignals or detected data as input, and the probability that a specific disease manifests given the clinical data may be set as the target variable of the clinical data-based learning model. Accordingly, the clinical data-based learning model may be trained to output a lesion prediction result corresponding to the clinical data.
In an embodiment of the present disclosure, the clinical data are illustrated as biosignals measuring biological changes of the user (e.g., ECG, PPG, EMG, etc.) or as data detected from body fluids, urine, biopsies, and the like produced by the user's body; however, the present disclosure is not limited thereto, and the types of clinical data may be variously changed. Likewise, although the configuration of the clinical data-based learning unit is illustrated on the basis of these types of clinical data, that configuration may be variously changed according to the type of clinical data.
Hereinafter, in the embodiments of the present disclosure, the first clinical data are illustrated as biosignals measuring biological changes of the user (e.g., ECG, PPG, EMG, etc.), and the second clinical data as data detected from body fluids, urine, biopsies, and the like produced by the user's body; on this basis, the operation of training at least one clinical data-based learning model is illustrated in more detail.
First, the integrated lesion learning apparatus may build a first clinical data-based learning model that performs learning on the first clinical data. To this end, the apparatus may remove noise from the first clinical data, that is, the biosignals (e.g., ECG, PPG, EMG, etc.), and extract from the denoised biosignals the diagnostic sections to be used for training or detection.
The integrated lesion learning apparatus may then build a first clinical data-based learning model that is trained using the biosignals of the diagnostic sections as input data and the lesion severity as the target variable. Since the first clinical data may be data organized in a sequential form, the first clinical data-based learning model may perform learning on the first clinical data on the basis of a recurrent neural network (RNN) scheme.
In addition, the integrated lesion learning apparatus may build a second clinical data-based learning model that performs learning on the second clinical data.
The second clinical data, that is, the data detected from body fluids, urine, biopsies, and the like produced by the user's body, come in various forms and may each exhibit values on different scales. Accordingly, the integrated lesion learning apparatus may normalize the second clinical data of these various forms.
Thereafter, the integrated lesion learning apparatus may train a second clinical data-based learning model using the normalized second clinical data as input data and the lesion severity as the target variable. Preferably, the second clinical data-based learning model may be trained on the basis of a feed-forward neural network (FFNN) scheme.
Meanwhile, in step S1030, the integrated lesion learning apparatus may receive the lesion prediction results provided in steps S1010 and S1020 and build an integrated learning model that learns the final lesion prediction result as the corresponding output. In particular, the integrated learning model may use the plurality of lesion prediction results provided in steps S1010 and S1020 as input data, and may set as its output the same target variable as that set when training the image-based learning model and the clinical data-based learning models. The integrated learning model can thereby build an ensemble model by learning the weights among the outputs of the image-based learning model and the clinical data-based learning models.
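A stacking-style training loop consistent with this step might look as follows, assuming three image-based and two clinical-data-based base predictions and a shared binary target variable; all shapes, the learning rate, and the iteration count are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-ins for outputs of the already-trained base models (steps S1010, S1020).
image_preds = torch.rand(100, 3)     # three image-based lesion predictions
clinical_preds = torch.rand(100, 2)  # first/second clinical-data predictions
targets = torch.randint(0, 2, (100, 1)).float()  # same target variable as the base models

meta = nn.Sequential(nn.Linear(5, 1), nn.Sigmoid())  # learned weights over base outputs
opt = torch.optim.Adam(meta.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for _ in range(200):                 # stacking-style ensemble training
    opt.zero_grad()
    loss = loss_fn(meta(torch.cat([image_preds, clinical_preds], dim=1)), targets)
    loss.backward()
    opt.step()
```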
FIG. 11 is a flowchart illustrating the procedure of a lesion diagnosis method according to an embodiment of the present disclosure.
The lesion diagnosis method according to an embodiment of the present disclosure may be performed by the above-described lesion diagnosis apparatus.
Referring to FIG. 11, in step S1110, the lesion diagnosis apparatus may receive a medical image and output the corresponding image-based lesion prediction result. Here, the lesion diagnosis apparatus may use at least one image-based learning model to output, as the image-based lesion prediction result, the probability that a specific disease manifests in a specific region included in the medical image. The medical image is an image of the entire body or of a specific diagnostic region captured by various imaging techniques, and may include parametric MRI images such as T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, STIR images, T1 images, T1 with agents images, and FLAIR images, as well as X-ray images, CT images, and the like.
Looking at the operation of step S1110 in more detail: first, the lesion diagnosis apparatus may include a lesion region detection learning model, which receives a medical image of the body where the user's diagnostic region is located (hereinafter referred to as the 'original medical image') and detects a medical image in which the lesion region has been extracted from the original medical image (hereinafter referred to as the 'lesion-region image').
Furthermore, a medical image may be selectively used on the basis of the body organ or diagnostic region, or of the characteristics of the lesions present therein. For example, when the body organ or diagnostic region is the prostate, the lesion diagnosis apparatus may select T2-weighted (T2) images, apparent diffusion coefficient (ADC) images, and the like as the input of the at least one image-based learning model. As another example, when it is the liver, the apparatus may select STIR images, T1 images, T1 with agents images, T2 images, and the like as that input. As yet another example, when it is the brain, the apparatus may select T1 images, T2 images, FLAIR images, and the like as that input.
The operation by which the lesion diagnosis apparatus detects the lesion-region image from the original medical image may be performed on the basis of a convolutional neural network (CNN) technique, a pooling technique, or the like. For example, the lesion diagnosis apparatus may include image-based severity learning models, which may be learning models built on the basis of a CNN technique or a pooling technique.
In particular, the image-based severity learning models may be composed of learning models of mutually different structures, so that even when the same lesion-region image is input, the different models output different image-based lesion prediction results.
The lesion diagnosis apparatus may further include an image-based integrated learning model that receives the image-based lesion prediction results and outputs the image-based integrated lesion prediction result, and may compute the image-based integrated lesion prediction result through that model.
In step S1120, the lesion diagnosis apparatus may receive at least one item of clinical data and output at least one lesion prediction result corresponding to it. Here, the clinical data may include biosignals measuring biological changes of the user (e.g., ECG, PPG, EMG, etc.) or data detected from body fluids, urine, biopsies, and the like produced by the user's body. On this basis, the lesion diagnosis apparatus may include models trained to receive such biosignals or detected data and to output, as the lesion prediction result, the probability that a specific disease manifests given the clinical data.
These clinical data-based learning models may be organized according to the type of input data, and the lesion diagnosis apparatus may operate so as to classify the type of the input data and provide the data to the appropriate clinical data-based learning model.
Specifically, the lesion diagnosis apparatus may remove noise from the biosignals (e.g., ECG, PPG, EMG, etc.) input as the first clinical data, and then extract, from the denoised biosignals, the diagnostic sections to be used for lesion detection. The lesion diagnosis apparatus may include first clinical data-based learning models 830-1, 830-2, ..., 830-n that receive the biosignals of the extracted diagnostic sections and output the lesion severity, and may thereby output the lesion prediction result based on the first clinical data.
Since the first clinical data may be data organized in a sequential form, the first clinical data-based learning model may be a model trained on the basis of a recurrent neural network (RNN) scheme.
Meanwhile, the second clinical data detected from body fluids, urine, biopsies, and the like produced by the user's body come in various forms, and each form may exhibit values on a different scale. Accordingly, the lesion diagnosis apparatus may normalize the data input as the second clinical data, that is, the data detected from body fluids, urine, biopsies, and the like produced by the user's body. Thereafter, the lesion diagnosis apparatus may output the lesion prediction result based on the second clinical data through a second clinical data-based learning model that receives the normalized second clinical data and outputs the lesion severity. In this case, the second clinical data-based learning model may be a model built on the basis of a feed-forward neural network (FFNN) scheme.
In step S1130, the lesion diagnosis apparatus may combine the image-based lesion prediction result, the lesion prediction result based on the first clinical data, the lesion prediction result based on the second clinical data, and the like, to compute the final lesion prediction result. The computation of the final lesion prediction result may be performed through an integrated learning model built through ensemble learning, which may be a learning model built by the integrated lesion learning apparatus 10 of FIG. 1 described above.
FIG. 12 is a block diagram illustrating a computing system that executes the integrated lesion learning method and apparatus and the lesion diagnosis method and apparatus according to an embodiment of the present disclosure.
Referring to FIG. 12, the computing system 1000 may include at least one processor 1100, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage 1600, and a network interface 1700, which are connected through a bus 1200.
The processor 1100 may be a central processing unit (CPU) or a semiconductor device that executes processing on instructions stored in the memory 1300 and/or the storage 1600. The memory 1300 and the storage 1600 may include various types of volatile or nonvolatile storage media. For example, the memory 1300 may include a read-only memory (ROM) and a random-access memory (RAM).
Accordingly, the steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware executed by the processor 1100, in a software module, or in a combination of the two. A software module may reside in a storage medium (i.e., the memory 1300 and/or the storage 1600) such as RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, or a CD-ROM. An exemplary storage medium is coupled to the processor 1100, which can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral with the processor 1100. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a user terminal. Alternatively, the processor and the storage medium may reside as discrete components in a user terminal.
Although the exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, this is not intended to limit the order in which the steps are performed; where necessary, the steps may be performed simultaneously or in a different order. To implement a method according to the present disclosure, additional steps may be included alongside the illustrated steps, some steps may be excluded while the remaining steps are included, or some steps may be excluded while additional other steps are included.
The various embodiments of the present disclosure do not enumerate all possible combinations but are intended to describe representative aspects of the present disclosure, and the matters described in the various embodiments may be applied independently or in combinations of two or more.
In addition, the various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof. In the case of implementation by hardware, they may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, and the like.
The scope of the present disclosure includes software or machine-executable instructions (e.g., an operating system, an application, firmware, a program, etc.) that cause operations according to the methods of the various embodiments to be executed on an apparatus or a computer, as well as a non-transitory computer-readable medium on which such software or instructions are stored and from which they are executable on an apparatus or a computer.

Claims (20)

  1. An apparatus for training a learning model for lesion detection, the apparatus comprising:
    an image-based learning unit configured to train at least one image-based learning model that receives a medical image and outputs an image-based lesion prediction result;
    at least one clinical data-based learning unit configured to train a clinical data-based learning model that receives clinical data and outputs a clinical data-based lesion prediction result; and
    an integrated learning unit configured to perform ensemble learning of an integrated learning model that receives the image-based lesion prediction result and the clinical data-based lesion prediction result and outputs a final lesion prediction result.
  2. The apparatus of claim 1, wherein the image-based learning unit comprises:
    a plurality of image-based learning models that receive the medical image and output lesion prediction results, the learning models having mutually different learning structures; and
    an image-based integrated learning model that is ensemble-learned to receive the plurality of lesion prediction results output through the plurality of image-based learning models and to produce the corresponding image-based lesion prediction result.
  3. The apparatus of claim 1, wherein:
    the image-based learning unit further comprises a lesion region detection unit including a lesion region detection model that detects and outputs an image of a lesion region in response to the input of the medical image; and
    the lesion region detection unit provides the image of the lesion region as the input of the plurality of image-based learning models.
  4. The apparatus of claim 1, wherein
    the clinical data-based learning unit comprises at least one biosignal-based learning model that takes a biosignal as input and outputs a quantified lesion severity.
  5. The apparatus of claim 4, wherein the clinical data-based learning unit comprises:
    a noise filter unit that removes noise from the biosignal; and
    a diagnostic section detection unit that detects, from the biosignal, the diagnostic sections to be used for training.
  6. The apparatus of claim 4, wherein
    the at least one biosignal-based learning model is a learning model based on a recurrent neural network (RNN) scheme.
  7. The apparatus of claim 1, wherein
    the clinical data-based learning unit comprises at least one clinical information-based learning model that takes at least one item of clinical information acquired through clinical testing as input and outputs a quantified lesion severity.
  8. The apparatus of claim 7, wherein
    the clinical data-based learning unit further comprises a data normalization unit that normalizes and provides the at least one item of clinical information acquired through clinical testing.
  9. The apparatus of claim 7, wherein
    the at least one clinical information-based learning model is a learning model based on a feed-forward neural network (FFNN) scheme.
  10. A method of training a learning model for lesion detection, the method comprising:
    training at least one image-based learning model that receives a medical image and outputs an image-based lesion prediction result;
    training at least one biosignal-based learning model that receives a biosignal and outputs a biosignal-based lesion prediction result corresponding to the biosignal;
    training at least one clinical information-based learning model that receives at least one item of clinical information acquired through clinical testing and outputs a clinical data-based lesion prediction result corresponding to the at least one item of clinical information; and
    performing ensemble learning of an integrated learning model that receives the image-based lesion prediction result, the biosignal-based lesion prediction result, and the clinical data-based lesion prediction result and outputs a final lesion prediction result.
  11. An apparatus for diagnosing a lesion using learning models trained on lesions, the apparatus comprising:
    an image-based prediction unit configured to output, using at least one image-based learning model, an image-based lesion prediction result corresponding to an input medical image;
    at least one clinical data-based prediction unit configured to output, using a clinical data-based learning model, a clinical data-based lesion prediction result corresponding to input clinical data; and
    an integrated diagnosis unit configured to input the image-based lesion prediction result and the clinical data-based lesion prediction result into an integrated learning model and to determine a final lesion prediction result output through the integrated learning model.
  12. The apparatus of claim 11, wherein the image-based prediction unit comprises:
    a plurality of image-based learning models that receive the medical image and output lesion prediction results, the learning models having mutually different learning structures; and
    an image-based integrated learning model that receives the plurality of lesion prediction results output through the plurality of image-based learning models and outputs the image-based lesion prediction result.
  13. The apparatus of claim 11, wherein the image-based prediction unit further comprises:
    a lesion region detection unit including a lesion region detection model that detects and outputs an image of a lesion region in response to the input of the medical image,
    wherein the lesion region detection unit provides the image of the lesion region as an input to the plurality of image-based learning models.
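One plausible reading of the lesion region detection unit, sketched below: a mask produced by an unspecified detection model is used to crop the lesion region before the crop is handed to the image-based models. Both the mask source and the full-image fallback are assumptions:

    import torch

    def crop_lesion_region(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        """Crop the bounding box of the predicted lesion mask.
        `mask` is assumed to come from a separate lesion region detection model."""
        ys, xs = torch.nonzero(mask > 0.5, as_tuple=True)
        if len(ys) == 0:
            return image  # no lesion found: fall back to the full image
        return image[..., ys.min():ys.max() + 1, xs.min():xs.max() + 1]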
  14. The apparatus of claim 11, wherein the clinical-data-based prediction unit comprises at least one biosignal-based learning model that receives a biosignal as input and outputs a quantified severity of the lesion.
  15. The apparatus of claim 11, wherein the clinical-data-based prediction unit comprises:
    a noise filter unit that removes noise from the biosignal; and
    a diagnostic-segment detection unit that detects, from the biosignal, a diagnostic segment to be used for learning.
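A rough sketch of the two pre-processing units, assuming SciPy, a band-pass filter as the noise filter unit, and a fixed-window amplitude heuristic as the diagnostic-segment detector; the claim fixes none of these choices, including the cutoff frequencies and sampling rate:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def denoise(signal: np.ndarray, fs: float = 250.0) -> np.ndarray:
        """Band-pass the raw biosignal (0.5-40 Hz cutoffs are assumptions)."""
        b, a = butter(4, [0.5 / (fs / 2), 40.0 / (fs / 2)], btype="band")
        return filtfilt(b, a, signal)

    def diagnostic_segments(signal: np.ndarray, fs: float = 250.0,
                            win_s: float = 10.0) -> list:
        """Split into fixed windows and keep those whose amplitude range
        suggests usable content -- a crude stand-in for segment detection."""
        win = int(win_s * fs)
        segs = [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
        return [s for s in segs if np.ptp(s) > 0.1 * np.ptp(signal)]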
  16. The apparatus of claim 11, wherein the at least one biosignal-based learning model is a learning model based on a recurrent neural network (RNN).
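For illustration, a minimal RNN-family model for biosignal severity scoring, assuming PyTorch; the GRU cell, the (batch, time, channels) layout, and all sizes are assumptions, since the claim only recites an RNN-based model:

    import torch
    import torch.nn as nn

    class BiosignalRNN(nn.Module):
        """GRU over biosignal samples; the last hidden state is mapped to a
        single quantified severity score."""
        def __init__(self, n_channels: int = 1, hidden: int = 64):
            super().__init__()
            self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            _, h = self.rnn(x)       # x: (batch, time, channels)
            return self.head(h[-1])  # severity score per sample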
  17. The apparatus of claim 11, wherein the clinical-data-based prediction unit comprises at least one clinical-information-based learning model that receives as input at least one piece of clinical information obtained through a clinical trial and outputs a quantified severity of the lesion.
  18. The apparatus of claim 17, wherein the clinical-data-based prediction unit further comprises a data normalization unit that normalizes and provides the at least one piece of clinical information obtained through the clinical trial.
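A small sketch of the data normalization unit, assuming z-score normalization over per-feature statistics; the claim does not specify the normalization scheme:

    import numpy as np

    class ClinicalNormalizer:
        """Z-score normalization of clinical features (scheme is an assumption)."""
        def fit(self, x: np.ndarray) -> "ClinicalNormalizer":
            self.mean = x.mean(axis=0)
            self.std = x.std(axis=0) + 1e-8  # guard against constant features
            return self

        def transform(self, x: np.ndarray) -> np.ndarray:
            return (x - self.mean) / self.std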
  19. The apparatus of claim 11, wherein the at least one clinical-information-based learning model is a learning model based on a feed-forward neural network (FFNN).
  20. A method of diagnosing a lesion using a learning model trained on lesions, the method comprising:
    outputting an image-based lesion prediction result corresponding to an input medical image, using an image-based learning model;
    checking a biosignal-based lesion prediction result corresponding to an input biosignal, using a biosignal-based learning model;
    checking a clinical-data-based lesion prediction result corresponding to at least one piece of clinical information obtained through a clinical trial, using a clinical-information-based learning model; and
    inputting the image-based lesion prediction result, the biosignal-based lesion prediction result, and the clinical-data-based lesion prediction result into an integrated learning model, and checking the final lesion prediction result output through the integrated learning model.
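Putting the recited steps together, a hedged sketch of the claim-20 inference flow; every model argument is a placeholder for whichever trained networks the system actually deploys:

    import torch

    @torch.no_grad()
    def diagnose(image, biosignal, clinical,
                 image_model, bio_model, clin_model, integrated_model):
        """Run the three per-modality predictions, then the integrated model."""
        p_img = image_model(image)        # image-based lesion prediction
        p_bio = bio_model(biosignal)      # biosignal-based lesion prediction
        p_clin = clin_model(clinical)     # clinical-data-based prediction
        return integrated_model(p_img, p_bio, p_clin)  # final result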
PCT/KR2020/006971 2019-05-29 2020-05-29 Artificial intelligence-based diagnosis support system using ensemble learning algorithm WO2020242239A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0063322 2019-05-29
KR1020190063322A KR102100698B1 (en) 2019-05-29 2019-05-29 System for diagnosis auxiliary based on artificial intelligence using ensemble learning algorithm

Publications (1)

Publication Number Publication Date
WO2020242239A1 (en)

Family

ID=70912678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/006971 WO2020242239A1 (en) 2019-05-29 2020-05-29 Artificial intelligence-based diagnosis support system using ensemble learning algorithm

Country Status (2)

Country Link
KR (1) KR102100698B1 (en)
WO (1) WO2020242239A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409944A (en) * 2021-06-25 2021-09-17 清华大学深圳国际研究生院 Obstructive sleep apnea detection method and device based on deep learning
US20220130544A1 (en) * 2020-10-23 2022-04-28 Remmie, Inc Machine learning techniques to assist diagnosis of ear diseases

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102100698B1 (en) * 2019-05-29 2020-05-18 (주)제이엘케이 System for diagnosis auxiliary based on artificial intelligence using ensemble learning algorithm
WO2021162488A2 (en) * 2020-02-10 2021-08-19 주식회사 바디프랜드 Method for disease prediction and apparatus for performing same
KR102605837B1 (en) * 2020-07-13 2023-11-29 가톨릭대학교 산학협력단 Cancer progression/recurrence prediction system using multiple images
WO2022015000A1 (en) * 2020-07-13 2022-01-20 가톨릭대학교 산학협력단 Cancer progression/relapse prediction system and cancer progression/relapse prediction method using multiple images
KR20220065927A (en) * 2020-11-13 2022-05-23 (주)루티헬스 Apparatus and method for interpreting medical image
KR102317857B1 (en) * 2020-12-14 2021-10-26 주식회사 뷰노 Method to read lesion
KR102503609B1 (en) * 2021-02-01 2023-02-24 주식회사 코스모스메딕 Virtual patient information generating system and method using machine learning
KR102316525B1 (en) * 2021-03-08 2021-10-22 주식회사 딥바이오 Method for training artificial neural network for detecting prostate cancer from TURP pathological image, and computing system performing the same
KR102359362B1 (en) * 2021-09-16 2022-02-08 주식회사 스카이랩스 Deep learning-based blood pressure estimation system using PPG signal sensing ring

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170140757A (en) * 2016-06-10 2017-12-21 한국전자통신연구원 A clinical decision support ensemble system and the clinical decision support method by using the same
KR20180057300A (en) * 2016-11-22 2018-05-30 네이버 주식회사 Method and system for predicting prognosis from diagnostic histories using deep learning
KR101884609B1 (en) * 2017-05-08 2018-08-02 (주)헬스허브 System for diagnosing disease through modularized reinforcement learning
KR101857624B1 (en) * 2017-08-21 2018-05-14 동국대학교 산학협력단 Medical diagnosis method applied clinical information and apparatus using the same
KR20190030151A (en) * 2017-09-13 2019-03-21 이재준 Apparatus, method and computer program for analyzing image
KR102100698B1 (en) * 2019-05-29 2020-05-18 (주)제이엘케이 System for diagnosis auxiliary based on artificial intelligence using ensemble learning algorithm

Also Published As

Publication number Publication date
KR102100698B1 (en) 2020-05-18

Similar Documents

Publication Publication Date Title
WO2020242239A1 (en) Artificial intelligence-based diagnosis support system using ensemble learning algorithm
WO2020235966A1 (en) Device and method for processing medical image using predicted metadata
WO2021049729A1 (en) Method for predicting likelihood of developing lung cancer by using artificial intelligence model, and analysis device therefor
WO2017022882A1 (en) Apparatus for classifying pathological diagnosis of medical image, and pathological diagnosis system using same
WO2017095014A1 (en) Cell abnormality diagnosing system using dnn learning, and diagnosis managing method of same
WO2020139009A1 (en) Cerebrovascular disease learning device, cerebrovascular disease detection device, cerebrovascular disease learning method, and cerebrovascular disease detection method
WO2021025461A1 (en) Ultrasound image-based diagnosis system for coronary artery lesion using machine learning and diagnosis method of same
WO2021060899A1 (en) Training method for specializing artificial intelligence model in institution for deployment, and apparatus for training artificial intelligence model
WO2020076133A1 (en) Validity evaluation device for cancer region detection
WO2019235828A1 (en) Two-face disease diagnosis system and method thereof
WO2021006522A1 (en) Image diagnosis apparatus using deep learning model and method therefor
WO2021137454A1 (en) Artificial intelligence-based method and system for analyzing user medical information
WO2021071288A1 (en) Fracture diagnosis model training method and device
WO2021261808A1 (en) Method for displaying lesion reading result
WO2022131642A1 (en) Apparatus and method for determining disease severity on basis of medical images
WO2019045385A1 (en) Image alignment method and device therefor
WO2021002669A1 (en) Apparatus and method for constructing integrated lesion learning model, and apparatus and method for diagnosing lesion by using integrated lesion learning model
WO2021201582A1 (en) Method and device for analyzing causes of skin lesion
WO2020222555A1 (en) Image analysis device and method
WO2020116878A1 (en) Intracranial aneurysm prediction device using fundus photo, and method for providing intracranial aneurysm prediction result
WO2023095989A1 (en) Method and device for analyzing multimodality medical images for cerebral disease diagnosis
WO2022119347A1 (en) Method, apparatus, and recording medium for analyzing coronary plaque tissue through ultrasound image-based deep learning
WO2020076134A1 (en) Device and method for correcting cancer region information
WO2021225226A1 (en) Alzheimer diagnosis device and method
WO2020075991A1 (en) Apparatus and method for evaluating state of cancer region

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20815321

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/04/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20815321

Country of ref document: EP

Kind code of ref document: A1