WO2021002669A1 - Apparatus and method for constructing an integrated lesion learning model, and apparatus and method for diagnosing a lesion using an integrated lesion learning model - Google Patents

Apparatus and method for constructing an integrated lesion learning model, and apparatus and method for diagnosing a lesion using an integrated lesion learning model

Info

Publication number
WO2021002669A1
Authority
WO
WIPO (PCT)
Prior art keywords
integrated
lesion
learning model
image
severity
Prior art date
Application number
PCT/KR2020/008583
Other languages
English (en)
Korean (ko)
Inventor
김원태
강신욱
이명재
김동민
남동연
Original Assignee
(주)제이엘케이
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)제이엘케이
Publication of WO2021002669A1

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033: Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271: Specific aspects of physiological measurement analysis
    • A61B5/7275: Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/40: ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for simulation or modelling of medical disorders

Definitions

  • The present disclosure relates to deep learning model training technology and, more specifically, to a method and apparatus for learning lesions on the basis of medical images, and to a method and apparatus for diagnosing lesions using a learning model built on the basis of medical images.
  • Deep learning learns from a very large amount of data and, when new data is input, selects the answer with the highest probability on the basis of the learning result.
  • Such deep learning can operate adaptively according to the input image, and because feature factors are found automatically while a model is trained on data, attempts to use it in the field of artificial intelligence are increasing.
  • Certain lesions may show signs in one specific area of the body, but some lesions may appear in a complex manner across various areas of the body, and the accompanying bodily changes may also be complex. It is therefore difficult to detect a disease or lesion simply by considering the symptoms or signs appearing in a single area of the patient.
  • For example, a disease such as systemic lupus erythematosus, a kind of rheumatic disease, may exhibit multiple simultaneous symptoms throughout the body.
  • Another technical object of the present disclosure is to provide a method and apparatus for integrated lesion learning that comprehensively reflect medical images for various diseases while building an integrated learning model based on continuous learning using generative adversarial networks (GANs).
  • Another technical object of the present disclosure is to provide a method and apparatus for complex learning of the progress of a disease, the relationships between diseases, the state of metastasis, and the like, by comprehensively reflecting and learning the symptoms or signs expressed in various parts of the body by various diseases.
  • Another technical object of the present disclosure is to provide a diagnostic method and apparatus capable of comprehensively predicting the progress of a disease, the relationships between diseases, the state of metastasis, and the like, by comprehensively reflecting the symptoms or signs expressed in various parts of the body by various diseases.
  • According to one aspect of the present disclosure, an apparatus for integrated lesion learning may include: an image normalization unit that receives a plurality of diagnostic images taken for the diagnosis of different diseases and normalizes them; a lesion region learning unit that learns the lesion regions present in the images in response to the input of the plurality of normalized images; a lesion image extraction unit that extracts a uniformly sized lesion region image based on each lesion region; and a severity integrated learning unit that learns the type of disease and its severity in response to the input of the uniformly sized lesion region images.
  • According to another aspect of the present disclosure, a method for integrated lesion learning may include: receiving a plurality of diagnostic images taken for the diagnosis of different diseases; normalizing the plurality of diagnostic images; learning the lesion regions present in the images in response to the input of the plurality of normalized images; extracting a uniformly sized lesion region image based on each lesion region; and learning the type of disease and its severity in response to the input of the uniformly sized lesion region images.
  • According to another aspect of the present disclosure, an apparatus for diagnosing a lesion may be provided.
  • The apparatus may include: an image normalization unit that receives a diagnostic image taken for the diagnosis of a disease and normalizes it; a lesion region detection unit that detects the lesion region corresponding to the normalized diagnostic image using a lesion region integrated learning model; a lesion image extraction unit that extracts a uniformly sized lesion region image based on the lesion region; and a severity integrated detection unit that detects the type of the disease and its severity using a severity integrated learning model trained to detect them in response to the input of the uniformly sized lesion region image.
  • According to another aspect of the present disclosure, a method for diagnosing a lesion may include: receiving a diagnostic image taken for the diagnosis of a disease; normalizing the diagnostic image; and detecting the lesion region corresponding to the normalized diagnostic image using a lesion region integrated learning model.
  • According to the present disclosure, a method and apparatus for learning lesion severity by comprehensively considering medical images for various diseases may be provided.
  • According to the present disclosure, a method and apparatus for integrated lesion learning that comprehensively reflect medical images for various diseases while constructing an integrated learning model based on continuous learning using generative adversarial networks (GANs) can be provided.
  • According to the present disclosure, a method and apparatus for complex learning of the progress of a disease, the relationships between diseases, the state of metastasis, and the like, by comprehensively reflecting and learning the symptoms or signs expressed in various parts of the body by various diseases, can be provided.
  • According to the present disclosure, a diagnostic method and apparatus capable of comprehensively predicting the progress of a disease, the relationships between diseases, the state of metastasis, and the like, by comprehensively reflecting the symptoms or signs expressed in various parts of the body by various diseases, can be provided.
  • FIG. 1 is a block diagram showing the configuration of a lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating a learning data set used in the apparatus for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram illustrating a learning operation of a lesion region integrated learning model provided in the lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram illustrating a learning operation of a severity integrated learning model provided in the device for integrated learning of a lesion according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating a GAN-based lesion area integrated learning model constructed by the lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram illustrating another example of a GAN-based lesion region integrated learning model constructed by the lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating a severity integrated learning model constructed by a lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a diagram illustrating another example of a severity integrated learning model constructed by a lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • FIG. 9 is a block diagram showing the configuration of an integrated lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is a flowchart illustrating a procedure of a method for learning lesion integration according to an embodiment of the present disclosure.
  • FIG. 11 is a flowchart illustrating a procedure of a method for integrated lesion diagnosis according to an embodiment of the present disclosure.
  • FIG. 12 is a block diagram illustrating a method and apparatus for integrated learning of a lesion and a computing system for executing the method and apparatus for diagnosing a lesion according to an embodiment of the present disclosure.
  • When a component is said to be “connected”, “coupled”, or “linked” to another component, this includes not only a direct connection but also an indirect connection in which another component exists between them.
  • When a certain component “includes” or “has” another component, this means that still other components may be further included, rather than excluded, unless otherwise stated.
  • Terms such as first and second are used only to distinguish one component from another, and do not limit the order or importance of the components unless otherwise noted. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • components that are distinguished from each other are intended to clearly describe each feature, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated to be formed in one hardware or software unit, or one component may be distributed in a plurality of hardware or software units. Therefore, even if not stated otherwise, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • components described in various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment consisting of a subset of components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments including other elements in addition to the elements described in the various embodiments are included in the scope of the present disclosure.
  • FIG. 1 is a block diagram showing the configuration of a lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • Referring to FIG. 1, the integrated lesion learning apparatus 10 may be configured to receive a plurality of diagnostic images taken for the diagnosis of different diseases and to learn the corresponding type of disease and its severity. It may include an image normalization unit 11, a lesion region learning unit 13, an image extraction unit 15, and a disease severity learning unit 17.
  • a plurality of diagnostic images photographed for diagnosis of different diseases may be photographed in different formats or may have different attributes according to the characteristics of the disease.
  • Preferably, the diagnostic images are composed of images of a similar type.
  • the diagnostic image may include magnetic resonance imaging (MRI), computerized tomography (CT) images, and X-ray images.
  • the image normalization unit 11 may normalize a plurality of diagnostic images captured for diagnosis of different diseases. For example, the image normalization unit 11 may convert the RGB value of each pixel included in the diagnostic image into a value obtained by dividing the RGB value by a predetermined value (eg, 255).
  • the size of the image may be configured differently according to the type of the diagnostic image.
  • Accordingly, the integrated lesion learning apparatus 10 may further include an image resizing unit 12 for resizing the diagnostic images to a common size.
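As an illustration of the normalization and resizing described above, here is a minimal Python sketch assuming 8-bit RGB inputs; the function name and target size are illustrative, not taken from the patent.

```python
import numpy as np
from PIL import Image

def normalize_and_resize(path: str, size: tuple = (256, 256)) -> np.ndarray:
    """Resize a diagnostic image to a common size and scale its 8-bit RGB values into [0, 1]."""
    image = Image.open(path).convert("RGB").resize(size, Image.BILINEAR)
    pixels = np.asarray(image, dtype=np.float32)
    return pixels / 255.0  # divide each RGB value by the predetermined value 255
```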
  • The lesion area learning unit 13 may be provided with a learning model that receives diagnostic images corresponding to various diseases and detects the corresponding lesion area, that is, a lesion area integrated learning model, and may carry out learning on it. Furthermore, it is preferable that the lesion area learning unit 13 trains the lesion area integrated learning model based on continuous learning using generative adversarial networks (GANs).
  • Since the size of the lesion area may vary, the image extraction unit 15 may enlarge or reduce the lesion area so that the image of the lesion area is reconstructed to a predetermined size.
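A minimal sketch of how the image extraction unit could produce a uniformly sized lesion area image, assuming the lesion area is available as a binary mask; the helper name and output size are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def extract_lesion_patch(image: np.ndarray, mask: np.ndarray,
                         out_size: tuple = (128, 128)) -> np.ndarray:
    """Crop the bounding box of a binary lesion mask, then enlarge or reduce it to a fixed size."""
    ys, xs = np.nonzero(mask)                             # pixel coordinates of the lesion area
    patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    patch_img = Image.fromarray((patch * 255).astype(np.uint8))
    resized = patch_img.resize(out_size, Image.BILINEAR)  # reconstruct to a predetermined size
    return np.asarray(resized, dtype=np.float32) / 255.0
```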
  • the disease severity learning unit 17 may construct a learning model for learning the type of disease and the severity of the disease, that is, a severity integrated learning model, in response to an input of an image of a lesion area having a uniform size.
  • the integrated lesion learning apparatus 10 may prepare a training data set 200 (refer to FIG. 2) for learning the above-described lesion area integrated learning model or the severity integrated learning model.
  • The training data set 200 may include diagnostic images 201, 202, and 203 of the diagnostic regions of patients with different diseases; lesion area images 211, 212, and 213 extracted from the diagnostic images 201, 202, and 203; and data 221, 222, and 223 indicating the type of disease (221a, 222a, 223a) and the severity of the disease (221b, 222b, 223b) corresponding to each lesion area.
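One possible way to organize such a training data set in code; the record type and field names are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LesionSample:
    """One training record: a diagnostic image, its lesion area image, and the labels."""
    diagnostic_image: np.ndarray   # normalized diagnostic image (cf. 201, 202, 203)
    lesion_image: np.ndarray       # extracted lesion area image (cf. 211, 212, 213)
    disease_type: int              # disease type label (cf. 221a, 222a, 223a)
    severity: int                  # disease severity label (cf. 221b, 222b, 223b)

training_set = [
    LesionSample(np.zeros((256, 256, 3)), np.zeros((128, 128, 3)), disease_type=0, severity=2),
    LesionSample(np.zeros((256, 256, 3)), np.zeros((128, 128, 3)), disease_type=1, severity=1),
]
```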
  • FIG. 3 is a diagram illustrating a learning operation of a lesion region integrated learning model provided in the lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • the lesion region learning unit 13 may perform learning on the lesion region integrated learning model 300, and in this case, diagnostic images 201, 202, and 203 may be used as inputs.
  • the diagnostic images 201, 202, and 203 input to the lesion area integrated learning model 300 may be images normalized through the image normalization unit 11 described above.
  • the lesion region learning unit 13 may set and provide a target variable of the lesion region integrated learning model 300 as images 211, 212, and 213 of the lesion region.
  • Accordingly, the lesion area integrated learning model 300 can be trained to detect the images 211, 212, and 213 of the lesion area corresponding to the diagnostic images 201, 202, and 203; furthermore, the lesion area integrated learning model 300 may be trained on the basis of a convolutional neural network (CNN) technique or a pooling technique.
  • the lesion region integrated learning model 300 may extract features of the diagnostic images 201, 202, and 203 using a general convolutional neural network (CNN) technique or a pooling technique.
  • the pooling technique may include at least one of a max pooling technique and an average pooling technique.
  • the pooling technique referred to in the present disclosure is not limited to the max pooling technique or the average pooling technique, and includes any technique for obtaining a representative value of an image region of a predetermined size.
  • the representative value used in the pooling technique may be at least one of a variance value, a standard deviation value, a mean value, a most frequent value, a minimum value, and a weighted average value, in addition to the maximum value and the average value.
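A minimal sketch of pooling with an arbitrary representative value, matching the description above; the loop-based implementation is for clarity only.

```python
import numpy as np

def pool2d(x: np.ndarray, window: int, stride: int, reduce=np.max) -> np.ndarray:
    """Reduce each window of a 2-D feature map to a single representative value."""
    h = (x.shape[0] - window) // stride + 1
    w = (x.shape[1] - window) // stride + 1
    out = np.empty((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            region = x[i * stride:i * stride + window, j * stride:j * stride + window]
            out[i, j] = reduce(region)  # np.max, np.mean, np.var, np.std, np.min, ...
    return out

fmap = np.arange(16, dtype=np.float32).reshape(4, 4)
max_pooled = pool2d(fmap, window=2, stride=2, reduce=np.max)   # max pooling
avg_pooled = pool2d(fmap, window=2, stride=2, reduce=np.mean)  # average pooling
var_pooled = pool2d(fmap, window=2, stride=2, reduce=np.var)   # variance as representative value
```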
  • FIG. 4 is a diagram illustrating a learning operation of a severity integrated learning model provided in the device for integrated learning of a lesion according to an embodiment of the present disclosure.
  • the disease severity learning unit 17 may perform learning on the severity integrated learning model 400, and at this time, images 211, 212, and 213 of the lesion area may be used as inputs.
  • the images 211, 212, and 213 of the lesion area input to the severity integrated learning model 400 may be images extracted or resized through the image extraction unit 15 described above.
  • disease severity learning unit 17 may set and provide the target variable of the severity integrated learning model 400 as severity data 221, 222, and 223.
  • Accordingly, the severity integrated learning model 400 learns to detect the type of disease (221a, 222a, 223a) or the severity of the disease (221b, 222b, 223b) corresponding to the lesion area images 211, 212, and 213; furthermore, the severity integrated learning model 400 may be trained on the basis of a convolutional neural network (CNN) technique or a pooling technique.
  • the severity integrated learning model 400 may extract features of the lesion area images 211, 212, 213 using a general convolutional neural network (CNN) technique or a pooling technique.
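A minimal PyTorch sketch of a model with a shared convolutional trunk and two output heads, one for disease type and one for severity, consistent with the description above; the architecture and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SeverityIntegratedModel(nn.Module):
    """Shared CNN features with two heads: disease type and disease severity."""
    def __init__(self, num_types: int, num_severities: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # pooling layer reduces spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.type_head = nn.Linear(32, num_types)           # disease type (cf. 221a, 222a, 223a)
        self.severity_head = nn.Linear(32, num_severities)  # disease severity (cf. 221b, 222b, 223b)

    def forward(self, x: torch.Tensor):
        z = self.features(x)
        return self.type_head(z), self.severity_head(z)

model = SeverityIntegratedModel(num_types=3, num_severities=5)
type_logits, severity_logits = model(torch.randn(1, 3, 128, 128))
```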
  • The convolutional neural network is used to extract "features" such as borders and line colors from the input data (images), and may include a plurality of layers. Each layer may receive input data and generate output data by processing the input data of that layer.
  • The convolutional neural network may output, as output data, a feature map generated by convolving the input image or an input feature map with filter kernels.
  • the initial layers of the convolutional neural network can be operated to extract low-level features such as edges or gradients from the input.
  • the next layers of the neural network can gradually extract more complex features, such as a prostate region or a body region such as a brain region.
  • the convolutional neural network may include a convolutional layer in which a convolution operation is performed, as well as a pooling layer in which a pooling operation is performed.
  • the pooling technique is a technique used to reduce the spatial size of data in the pooling layer.
  • The pooling technique includes a max pooling technique, which selects the maximum value in the corresponding area, and an average pooling technique, which selects the average value of the area. In the image recognition field, the max pooling technique is generally used.
  • In general, the pooling window size and the interval (stride) are set to the same value.
  • The stride refers to the interval by which the filter is moved when it is applied to the input data, and it may also be used to adjust the size of the output data.
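The following sketch illustrates how convolution and pooling layers transform feature-map sizes, and how the stride controls the output size; the layer parameters are arbitrary examples.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=1, padding=1)
pool = nn.MaxPool2d(kernel_size=2, stride=2)  # window size and stride set to the same value

x = torch.randn(1, 3, 64, 64)       # a batch with one 64x64 input image
feature_map = conv(x)               # convolution with filter kernels -> (1, 8, 64, 64)
reduced = pool(feature_map)         # pooling halves the spatial size  -> (1, 8, 32, 32)

strided_conv = nn.Conv2d(8, 8, kernel_size=3, stride=2, padding=1)
print(strided_conv(reduced).shape)  # stride=2 also shrinks the output -> (1, 8, 16, 16)
```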
  • FIG. 5 is a diagram illustrating a lesion region integrated learning model based on continuous learning using a GAN constructed by the lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • the lesion area learning unit 13 may build a lesion area integrated learning model 500 based on continuous learning using GAN.
  • The GAN-based integrated learning model 500 may include a plurality of integrated learning models, each of which uses different diagnostic images as its input. The learning models may be constructed sequentially, with the results detected by one integrated learning model continuously used as additional inputs to the next.
  • the GAN-based integrated learning model 500 may include a first integrated learning model 510, a second integrated learning model 520, and a third integrated learning model 530.
  • Specifically, the lesion area learning unit 13 may train the first integrated learning model 510 by receiving a first diagnostic image (e.g., an MRI of the prostate area of a prostate cancer patient) as an input and setting a first lesion area image (e.g., an image of the prostate cancer area) as the target variable. Since the first integrated learning model 510 performs GAN-based learning, it can generate real data and fake data corresponding to the first lesion area image.
  • The lesion area learning unit 13 may use a second diagnostic image (e.g., an MRI of the brain area of a brain tumor patient) as the input of the second integrated learning model 520 and set a second lesion area image (e.g., an image of the brain tumor area) as the target variable. In this case, the output values of the first integrated learning model 510, that is, the real data and fake data corresponding to the first lesion area image, may additionally be set as inputs of the second integrated learning model 520.
  • Through this, the second integrated learning model 520 can construct a learning model that reflects both the first diagnostic image and the second diagnostic image, and can output data, that is, real and fake data, that takes both diagnostic images into consideration.
  • The lesion area learning unit 13 may use a third diagnostic image (e.g., an MRI of the brain area of an Alzheimer's patient) as the input of the third integrated learning model 530 and a third lesion area image (e.g., an image of the Alzheimer's expression area) as the target variable to construct the third integrated learning model 530.
  • the lesion area learning unit 13 may set the output values of the second integrated learning model 520, that is, real data and fake data, as inputs of the third integrated learning model 530.
  • Through this, the third integrated learning model 530 can construct a learning model that reflects all of the first to third diagnostic images and can output data in which the lesion area is detected in consideration of all of the first to third diagnostic images.
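A structural sketch of how the chained inputs of FIG. 5 could be wired. The generator internals, the use of channel concatenation to feed the real and fake outputs of one model into the next, and the omission of the discriminators and the adversarial training loop are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class LesionGenerator(nn.Module):
    """Image-to-image generator: diagnostic image(s) in, lesion area map out."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # one-channel lesion map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

g1 = LesionGenerator(in_channels=1)      # model 510: first diagnostic image only
g2 = LesionGenerator(in_channels=1 + 2)  # model 520: second image + real/fake from 510
g3 = LesionGenerator(in_channels=1 + 2)  # model 530: third image + real/fake from 520

x1 = torch.randn(1, 1, 64, 64)           # first diagnostic image (e.g., prostate MRI)
real1 = torch.randn(1, 1, 64, 64)        # real first lesion area image (target of 510)
fake1 = g1(x1)                           # fake lesion image generated by model 510

x2 = torch.randn(1, 1, 64, 64)           # second diagnostic image (e.g., brain MRI)
fake2 = g2(torch.cat([x2, real1, fake1], dim=1))  # chained real/fake inputs

x3 = torch.randn(1, 1, 64, 64)           # third diagnostic image
real2 = torch.randn(1, 1, 64, 64)        # real second lesion area image
fake3 = g3(torch.cat([x3, real2, fake2], dim=1))
```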
  • FIG. 6 is a diagram illustrating another example of a GAN-based lesion region integrated learning model constructed by the lesion integrated learning apparatus according to an embodiment of the present disclosure.
  • Referring to FIG. 6, another example of the GAN-based lesion area integrated learning model may include a plurality of the GAN-based integrated learning models of FIG. 5 described above, together with an ensemble learning model 650 that performs ensemble learning on the output values of the plurality of integrated learning models 610, 620, and 630.
  • each of the integrated learning models 610, 620, and 630 may correspond to the GAN-based integrated learning model 500 of FIG. 5 described above, and the order of configuring the GAN may be set differently.
  • the first GAN-based integrated learning model 610 may include a first integrated learning model 611, a second integrated learning model 612, and a third integrated learning model 613.
  • The input of the first integrated learning model 611 is set as the first diagnostic image (e.g., an MRI of the prostate area of a prostate cancer patient), and the target variable is set as the first lesion area image (e.g., an image of the prostate cancer area). Since the first integrated learning model 611 performs GAN-based learning, it can generate real data and fake data corresponding to the first lesion area image.
  • The input of the second integrated learning model 612 may be set as a second diagnostic image (e.g., an MRI of the brain area of a brain tumor patient) together with the output values of the first integrated learning model 611 (the real data and fake data corresponding to the first lesion area image), and the target variable may be set as the second lesion area image (e.g., an image of the brain tumor area).
  • Through this, the second integrated learning model 612 can construct a learning model that reflects both the first diagnostic image and the second diagnostic image, and can output data, that is, real and fake data, that takes both diagnostic images into consideration.
  • The input of the third integrated learning model 613 may be set as a third diagnostic image (e.g., an MRI of the brain area of an Alzheimer's patient) together with the output values of the second integrated learning model 612 (real data and fake data), and the target variable may be set as the third lesion area image (e.g., an image of the Alzheimer's expression area).
  • Through this, the third integrated learning model 613 can construct a learning model that reflects all of the first to third diagnostic images and may be configured to output data in which the lesion area is detected in consideration of all of the first to third diagnostic images.
  • the second GAN-based integrated learning model 620 may also include a first integrated learning model 621, a second integrated learning model 622, and a third integrated learning model 623.
  • the order of input data and target variables may be set differently.
  • The input of the first integrated learning model 621 is set as the second diagnostic image (e.g., an MRI of the brain area of a brain tumor patient), and the target variable may be set as the second lesion area image (e.g., an image of the brain tumor area). Since the first integrated learning model 621 performs GAN-based learning, it can generate real data and fake data corresponding to the second lesion area image.
  • The input of the second integrated learning model 622 may be set as the first diagnostic image (e.g., an MRI of the prostate area of a prostate cancer patient) together with the output values of the first integrated learning model 621 (that is, the real data and fake data corresponding to the second lesion area image), and the target variable may be set as the first lesion area image (e.g., an image of the prostate cancer area).
  • The input of the third integrated learning model 623 may be set as the third diagnostic image (e.g., an MRI of the brain area of an Alzheimer's patient) together with the output values of the second integrated learning model 622 (real data and fake data), and the target variable may be set as the third lesion area image (e.g., an image of the Alzheimer's expression area).
  • the third GAN-based integrated learning model 630 may also include a first integrated learning model 631, a second integrated learning model 632, and a third integrated learning model 633.
  • The input data and target variables of the third GAN-based integrated learning model 630 may be set in an order different from those of the first GAN-based integrated learning model 610 and the second GAN-based integrated learning model 620. That is, the input of the first integrated learning model 631 is set as the third diagnostic image (e.g., an MRI of the brain area of an Alzheimer's patient), and the target variable may be set as the third lesion area image (e.g., an image of the Alzheimer's expression area). Accordingly, the first integrated learning model 631 can generate real data and fake data corresponding to the third lesion area image.
  • The input of the second integrated learning model 632 may be set as the second diagnostic image (e.g., an MRI of the brain area of a brain tumor patient) together with the output values of the first integrated learning model 631 (the real data and fake data corresponding to the third lesion area image), and the target variable may be set as the second lesion area image (e.g., an image of the brain tumor area).
  • The input of the third integrated learning model 633 may be set as the first diagnostic image (e.g., an MRI of the prostate area of a prostate cancer patient) together with the output values of the second integrated learning model 632 (real data and fake data), and the target variable may be set as the first lesion area image (e.g., an image of the prostate cancer area).
  • As described above, the plurality of integrated learning models 610, 620, and 630 are configured so that the order of the input data and target variables differs between them. Therefore, even though they are built using the same data set, the plurality of integrated learning models 610, 620, and 630 can be constructed to form different networks.
  • The ensemble learning model 650 is configured to ensemble the outputs of the plurality of integrated learning models 610, 620, and 630, so the predicted value output as a result may be improved.
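A minimal sketch of the ensemble step. Simple averaging is only one possible combination rule, since the text does not fix how the outputs of the models 610, 620, and 630 are combined.

```python
import torch

def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    """Average the lesion maps predicted by the differently ordered model chains."""
    with torch.no_grad():
        predictions = [m(x) for m in models]     # outputs of models 610, 620, 630
    return torch.stack(predictions).mean(dim=0)  # simple averaging ensemble (cf. 650)
```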
  • The scheme for constructing the lesion area integrated learning model illustrated in FIGS. 5 and 6 can likewise be applied to the severity integrated learning model.
  • That is, by replacing the first to third diagnostic images used as inputs with the first to third lesion area images, and replacing the first to third lesion area images used as target variables with first to third disease data (the type or severity of the disease), a severity integrated learning model can be constructed.
  • the severity integrated learning model may be constructed as illustrated in FIGS. 7 and 8.
  • FIG. 9 is a block diagram showing the configuration of an integrated lesion diagnosis apparatus according to an embodiment of the present disclosure.
  • Referring to FIG. 9, the integrated lesion diagnosis apparatus 90 may be configured to receive a plurality of diagnostic images taken for the diagnosis of different diseases and to detect the corresponding type of disease and its severity. It may include an image normalization unit 91, a lesion area detection unit 93, an image extraction unit 95, and a disease severity detection unit 97.
  • diagnostic images taken for diagnosis of a disease may be photographed in different formats or may have different properties according to the characteristics of the disease.
  • the diagnostic image may include magnetic resonance imaging (MRI), computerized tomography (CT) images, and X-ray images.
  • the image normalization unit 91 may normalize the diagnostic image. For example, the image normalization unit 91 may convert the RGB value of each pixel included in the diagnostic image into a value obtained by dividing the RGB value by a predetermined value (eg, 255).
  • the size of the image may be configured differently according to the type of the diagnostic image.
  • Accordingly, the integrated lesion diagnosis apparatus 90 may further include an image resizing unit 92 for resizing the diagnostic images to a common size.
  • The lesion area detection unit 93 may include a lesion area integrated learning model 930, which may be a model trained to receive diagnostic images corresponding to various diseases and detect the corresponding lesion area. Furthermore, the lesion area integrated learning model 930 may be a learning model built on the basis of generative adversarial networks (GANs).
  • Since the size of the lesion area may vary, the image extraction unit 95 may enlarge or reduce the lesion area so that the image of the lesion area is reconstructed to a predetermined size.
  • The disease severity detection unit 97 may include a severity integrated learning model 970, which may be a model trained to detect the type of disease and its severity in response to the input of a lesion area image of a uniform size. Further, the severity integrated learning model 970 may include a model trained on the basis of generative adversarial networks (GANs).
  • the lesion area integrated learning model 930 and the severity integrated learning model 970 may be learning models constructed through the lesion integrated learning apparatus described above with reference to FIGS. 1 to 9.
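A sketch of the end-to-end diagnosis flow of FIG. 9, reusing the lesion-patch helper sketched earlier; the call interfaces of the trained models 930 and 970 are assumptions.

```python
import numpy as np

def diagnose(diagnostic_image: np.ndarray, region_model, severity_model):
    """Normalize, detect the lesion area, then classify disease type and severity."""
    normalized = diagnostic_image.astype(np.float32) / 255.0  # image normalization unit 91
    lesion_mask = region_model(normalized)                    # lesion area detection unit 93 (model 930)
    patch = extract_lesion_patch(normalized, lesion_mask)     # image extraction unit 95
    disease_type, severity = severity_model(patch)            # disease severity detection unit 97 (model 970)
    return disease_type, severity
```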
  • FIG. 10 is a flowchart illustrating a procedure of a method for learning lesion integration according to an embodiment of the present disclosure.
  • the method for learning integrated lesions according to an embodiment of the present disclosure may be performed by the above-described integrated learning apparatus for lesions.
  • the integrated lesion learning device may be configured to receive a plurality of diagnostic images taken for diagnosis of different diseases, and to learn the type of disease and the severity of the disease corresponding thereto.
  • the plurality of diagnostic images may be photographed in different formats or may have different properties according to the characteristics of the disease.
  • Preferably, the diagnostic images are composed of images of a similar type.
  • the diagnostic image may include magnetic resonance imaging (MRI), computerized tomography (CT) images, and X-ray images.
  • The integrated lesion learning apparatus may normalize a plurality of diagnostic images captured for the diagnosis of different diseases (S1010). For example, the integrated lesion learning apparatus may convert the RGB value of each pixel included in a diagnostic image into a value obtained by dividing the RGB value by a predetermined value (eg, 255).
  • the size of the image may be configured differently according to the type of the diagnostic image. Accordingly, the integrated lesion learning apparatus may resize the size of the diagnostic image to the same size (S1020).
  • Based on this, the integrated lesion learning apparatus may train a learning model that receives diagnostic images corresponding to various diseases and detects the corresponding lesion area, that is, the lesion area integrated learning model (S1030). In this case, it is preferable that the integrated lesion learning apparatus trains the lesion area integrated learning model based on continuous learning using generative adversarial networks (GANs).
  • Specifically, the integrated lesion learning apparatus may perform learning on the lesion area integrated learning model 300 (refer to FIG. 3), and in this case, the diagnostic images 201, 202, and 203 may be used as inputs.
  • the diagnostic images 201, 202, and 203 input to the lesion area integrated learning model 300 may be images that have been normalized and resized through S1010 and S1020 described above.
  • The integrated lesion learning apparatus may provide the images 211, 212, and 213 of the lesion area as the target variable of the lesion area integrated learning model 300. Accordingly, the lesion area integrated learning model 300 can be trained to detect the images 211, 212, and 213 of the lesion area corresponding to the diagnostic images 201, 202, and 203; furthermore, the lesion area integrated learning model 300 may be trained on the basis of a convolutional neural network (CNN) technique or a pooling technique.
  • the lesion region integrated learning model 300 may extract features of the diagnostic images 201, 202, and 203 using a general convolutional neural network (CNN) technique or a pooling technique.
  • the pooling technique may include at least one of a max pooling technique and an average pooling technique.
  • the pooling technique referred to in the present disclosure is not limited to the max pooling technique or the average pooling technique, and includes any technique for obtaining a representative value of an image region of a predetermined size.
  • the representative value used in the pooling technique may be at least one of a variance value, a standard deviation value, a mean value, a most frequent value, a minimum value, and a weighted average value, in addition to the maximum value and the average value.
  • Since the size of the lesion area can vary, the integrated lesion learning apparatus may enlarge or reduce the lesion area and reconstruct the image of the lesion area to a predetermined size in order to make the size of the lesion area uniform (S1040).
  • Next, the integrated lesion learning apparatus may construct a learning model for learning the type of disease and its severity, that is, the severity integrated learning model, in response to the input of uniformly sized lesion area images (S1050). In this case, it is preferable that the integrated lesion learning apparatus trains the severity integrated learning model based on continuous learning using GANs.
  • the integrated lesion learning apparatus may perform learning on a severity integrated learning model 400 (refer to FIG. 4), and at this time, images 211, 212, and 213 of the lesion area may be used as inputs.
  • images 211, 212, and 213 of the lesion area input to the severity integrated learning model 400 may be images reconstructed to a predetermined size through the above-described step S1040.
  • The integrated lesion learning apparatus may set the severity data 221, 222, and 223 as the target variable of the severity integrated learning model 400. Accordingly, the severity integrated learning model 400 learns to detect the type of disease (221a, 222a, 223a) or the severity of the disease (221b, 222b, 223b) corresponding to the lesion area images 211, 212, and 213; furthermore, the severity integrated learning model 400 may be trained on the basis of a convolutional neural network (CNN) technique or a pooling technique. For example, the severity integrated learning model 400 may extract features of the lesion area images 211, 212, and 213 using a general CNN technique or a pooling technique.
  • The convolutional neural network used in the lesion area integrated learning model 300 or the severity integrated learning model 400 is used to extract "features" such as borders and line colors from the input data (images), and may include a plurality of layers. Each layer may receive input data and generate output data by processing the input data of that layer.
  • The convolutional neural network may output, as output data, a feature map generated by convolving the input image or an input feature map with filter kernels.
  • the initial layers of the convolutional neural network can be operated to extract low-level features such as edges or gradients from the input.
  • the next layers of the neural network can gradually extract more complex features, such as a prostate region or a body region such as a brain region.
  • the convolutional neural network may include a pooling layer in which a pooling operation is performed in addition to a convolutional layer in which a convolution operation is performed.
  • the pooling technique is a technique used to reduce the spatial size of data in the pooling layer.
  • the pooling technique includes a max pooling technique that selects a maximum value in a corresponding domain and an average pooling technique that selects an average value of the domain.
  • In the image recognition field, the max pooling technique is generally used.
  • In general, the pooling window size and the interval (stride) are set to the same value.
  • The stride refers to the interval by which the filter is moved when it is applied to the input data, and it may also be used to adjust the size of the output data.
  • FIG. 11 is a flowchart illustrating a procedure of a method for integrated lesion diagnosis according to an embodiment of the present disclosure.
  • the integrated lesion diagnosis method according to an embodiment of the present disclosure may be performed by the above-described integrated lesion diagnosis apparatus.
  • diagnostic images taken for diagnosis of a disease may be photographed in different formats or may have different properties according to the characteristics of the disease.
  • the diagnostic image may include magnetic resonance imaging (MRI), computerized tomography (CT) images, and X-ray images.
  • the integrated lesion diagnosis apparatus may normalize a plurality of diagnostic images captured for diagnosis of different diseases. For example, the integrated lesion diagnosis apparatus may convert the RGB value of each pixel included in the diagnostic image into a value obtained by dividing the RGB value by a predetermined value (eg, 255).
  • the size of the image may be configured differently according to the type of the diagnostic image.
  • the apparatus for integrated lesion diagnosis may resize the size of the diagnostic image to the same size.
  • the integrated lesion diagnosis apparatus may receive diagnostic images corresponding to various diseases and detect a lesion region corresponding thereto, and this operation may be performed through a lesion region integrated learning model.
  • the integrated learning model for the lesion area may be a learning model constructed to receive diagnostic images corresponding to various diseases and detect the lesion area corresponding thereto.
  • The lesion area integrated learning model may be a learning model built on the basis of continuous learning using generative adversarial networks (GANs).
  • Since the size of the lesion area may vary, the integrated lesion diagnosis apparatus may enlarge or reduce the lesion area so that the image of the lesion area is reconstructed to a predetermined size.
  • Next, the integrated lesion diagnosis apparatus may detect the type of disease and its severity in response to the input of the lesion area image. This detection may be performed using the severity integrated learning model, that is, a model trained to detect the type of disease and its severity in response to the input of a lesion area image of a uniform size.
  • the severity integrated learning model may include a model learned based on continuous learning using GAN.
  • The lesion area integrated learning model and the severity integrated learning model may be learning models constructed by the integrated lesion learning apparatus described above with reference to FIGS. 1 to 9 or by the integrated lesion learning method described above with reference to FIG. 10.
  • FIG. 12 is a block diagram illustrating a computing system for executing the method and apparatus for integrated lesion learning and the method and apparatus for lesion diagnosis according to an embodiment of the present disclosure.
  • Referring to FIG. 12, the computing system 1000 may include at least one processor 1100, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage 1600, and a network interface 1700, connected through a bus 1200.
  • the processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600.
  • the memory 1300 and the storage 1600 may include various types of volatile or nonvolatile storage media.
  • the memory 1300 may include read only memory (ROM) and random access memory (RAM).
  • The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by the processor 1100, or in a combination of the two.
  • A software module may reside in a storage medium (i.e., the memory 1300 and/or the storage 1600) such as RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, or a CD-ROM.
  • An exemplary storage medium is coupled to the processor 1100, which is capable of reading information from and writing information to the storage medium.
  • the storage medium may be integral with the processor 1100.
  • the processor and storage media may reside within an application specific integrated circuit (ASIC).
  • the ASIC may reside within the user terminal.
  • the processor and storage medium may reside as separate components within the user terminal.
  • exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, but this is not intended to limit the order in which steps are performed, and each step may be performed simultaneously or in a different order if necessary.
  • The illustrative steps may include an additional step, may include the remaining steps while excluding some steps, or may include an additional step while excluding some steps.
  • various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof.
  • In the case of implementation by hardware, the embodiments may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, and the like.
  • The scope of the present disclosure includes software or machine-executable instructions (e.g., operating systems, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium that stores such software or instructions and is executable on a device or computer.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An integrated lesion learning apparatus may be provided according to the present invention. The integrated lesion learning device may comprise: an image-based learning unit that trains at least one image-based learning model which receives a medical image and outputs an image-based lesion prediction result; at least one clinical-data-based learning unit that trains a clinical-data-based learning model which receives clinical data and outputs a clinical-data-based lesion prediction result; and an integrated learning unit for performing ensemble learning, which receives the image-based lesion prediction result and the clinical-data-based lesion prediction result and outputs a final lesion prediction result.
PCT/KR2020/008583 2019-07-01 2020-07-01 Apparatus and method for constructing an integrated lesion learning model, and apparatus and method for diagnosing a lesion using an integrated lesion learning model WO2021002669A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190079000A KR102100699B1 (ko) 2019-07-01 2019-07-01 Apparatus and method for constructing an integrated lesion learning model, and apparatus and method for diagnosing a lesion using the integrated lesion learning model
KR10-2019-0079000 2019-07-01

Publications (1)

Publication Number Publication Date
WO2021002669A1 (fr) 2021-01-07

Family

ID=70454610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/008583 WO2021002669A1 (fr) 2019-07-01 2020-07-01 Apparatus and method for constructing an integrated lesion learning model, and apparatus and method for diagnosing a lesion using an integrated lesion learning model

Country Status (2)

Country Link
KR (1) KR102100699B1 (fr)
WO (1) WO2021002669A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017771B (zh) * 2020-08-31 2024-02-27 吾征智能技术(北京)有限公司 Method and system for constructing a disease prediction model based on routine semen examination data
KR102320431B1 (ko) 2021-04-16 2021-11-08 주식회사 휴런 Medical-image-based tumor detection and diagnosis apparatus


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011212094A (ja) * 2010-03-31 2011-10-27 Diagnosis support system, diagnosis support apparatus, diagnosis support method, and diagnosis support program
KR20170140757A (ko) * 2016-06-10 2017-12-21 Clinical decision support ensemble system and clinical decision support method using the same
KR20180040287A (ko) * 2016-10-12 2018-04-20 Integrated system for reading and diagnosing medical images through machine learning
KR101974786B1 (ko) * 2018-08-17 2019-05-31 Method and system for predicting severity and prognosis using characteristics of cerebral aneurysm lesions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Non-official translation: KAKAO Policy Industry Research. AI Medical Imaging Technology Use Cases. [online]. 28 July 2017, [Retrieved on 10 October 2019], Retrieved from <https://brunch.co.kr/@kakao-it/81>. See pages 2-7. *

Also Published As

Publication number Publication date
KR102100699B1 (ko) 2020-04-16

Similar Documents

Publication Publication Date Title
WO2020242239A1 (fr) Artificial intelligence-based diagnosis support system using an ensemble learning algorithm
WO2019103440A1 (fr) Method for supporting the reading of a medical image of a subject and device using same
WO2021002669A1 (fr) Apparatus and method for constructing an integrated lesion learning model, and apparatus and method for diagnosing a lesion using an integrated lesion learning model
WO2019231104A1 (fr) Method for classifying images using a deep neural network and apparatus using same
WO2021071288A1 (fr) Method and device for training a fracture diagnosis model
WO2020180135A1 (fr) Apparatus and method for predicting brain disease, and learning apparatus for predicting brain disease
WO2022059969A1 (fr) Deep neural network pre-training method for classifying electrocardiogram data
WO2020076133A1 (fr) Validity evaluation device for cancer region detection
WO2019143021A1 (fr) Method for supporting image visualization and apparatus using same
WO2021071286A1 (fr) Medical image learning method and device based on a generative adversarial network
WO2019124836A1 (fr) Method for mapping a region of interest of a first medical image onto a second medical image, and device using same
WO2019132588A1 (fr) Image analysis device and method based on image feature and context
WO2022131642A1 (fr) Apparatus and method for determining disease severity on the basis of medical images
WO2019189972A1 (fr) Method for analyzing iris images by artificial intelligence to diagnose dementia
WO2022197044A1 (fr) Bladder lesion diagnosis method using a neural network, and system therefor
WO2022231200A1 (fr) Training method for training an artificial neural network to determine a breast cancer lesion area, and computing system performing same
WO2024049208A1 (fr) Device and method for measuring the distribution of air in the abdomen
WO2021201582A1 (fr) Method and device for analyzing the causes of a skin lesion
WO2020246676A1 (fr) Automatic cervical cancer diagnosis system
WO2020101428A1 (fr) Lesion area detection device, lesion area detection method, and computer program
KR102036052B1 (ko) Artificial intelligence-based device for determining the medical-image suitability of non-standardized skin images and converting them
WO2023075303A1 (fr) Artificial intelligence-based endoscopic diagnosis support system and method for controlling same
WO2023182702A1 (fr) Device and method for processing artificial intelligence diagnostic data for digital pathology images
CN113096132A (zh) Image processing method and apparatus, storage medium, and electronic device
WO2021015489A2 (fr) Method and device for analyzing a singular image region using an encoder

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20835311

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.06.2022). PUBLIC NOTIFICATION IN THE EP BULLETIN AS THE ADDRESS OF THE ADDRESSEE CANNOT BE ESTABLISHED.

122 Ep: pct application non-entry in european phase

Ref document number: 20835311

Country of ref document: EP

Kind code of ref document: A1