WO2021071286A1 - Generative adversarial network-based medical image learning method and device - Google Patents

Generative adversarial network-based medical image learning method and device

Info

Publication number
WO2021071286A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
learning
image
data set
medical
Prior art date
Application number
PCT/KR2020/013739
Other languages
French (fr)
Korean (ko)
Inventor
김원태
강신욱
이명재
김동민
남동연
Original Assignee
(주)제이엘케이
Priority date
Filing date
Publication date
Application filed by (주)제이엘케이
Publication of WO2021071286A1


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Definitions

  • The present disclosure relates to a technique for training a deep learning model and, more specifically, to a method and an apparatus for learning lesions through unsupervised learning.
  • Deep learning learns from a very large amount of data and, when new data are input, selects the answer with the highest probability based on the learning result.
  • Such deep learning can operate adaptively according to an image and automatically finds characteristic factors in the process of learning a model from data, so attempts to utilize it in the field of artificial intelligence have recently been increasing.
  • The technical task of the present disclosure is to provide a method and apparatus for learning medical images based on a Generative Adversarial Network (GAN) that can build a high-performance learning model using a small amount of labeling data.
  • a GAN-based medical image learning apparatus may be provided.
  • The apparatus includes: a learning data set management unit that manages a learning data set including a first medical image, labeling data, and a filtered medical image; and a lesion learning unit having a generator learning unit that manages learning of a generator (Generator) generating a second medical image, and a discriminator learning unit that manages learning of a discriminator (Discriminator) constructing the labeling data by using the first medical image, the labeling data, and the medical image filtered from the second medical image generated through the generator. The learning data set management unit may include a second medical image filtering unit that filters the second medical image and selectively provides it as the filtered medical image.
  • a GAN-based medical image learning method may be provided.
  • The method is a method of training a learning model for lesion detection and includes: managing and storing a learning data set including a first medical image, labeling data, and a filtered medical image; and performing learning of a lesion learning model that includes a generator (Generator) generating a second medical image and a discriminator (Discriminator) constructing the labeling data by using the first medical image, the labeling data, and the medical image filtered from the second medical image generated through the generator. Performing the learning of the lesion learning model includes detecting and providing the second medical image generated by the generator, and managing and storing the learning data set includes filtering the second medical image and selectively storing and managing it as the filtered medical image.
  • a GAN-based medical image learning method and apparatus capable of constructing a high-performance learning model using a small number of labeling data may be provided.
  • FIG. 1 is a block diagram showing the configuration of a GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIGS. 2A and 2B are diagrams illustrating an operation of learning a lesion learning model by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram illustrating an operation of restoring an error image by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIGS. 4A and 4B are diagrams illustrating a medical image in a normal state and a medical image in an abnormal state used in the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart illustrating a procedure of a GAN-based medical image learning method according to an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating a computing system that executes a GAN-based medical image learning method and apparatus according to an embodiment of the present disclosure.
  • When a component is said to be "connected", "coupled", or "linked" with another component, this may include not only a direct connection but also an indirect connection in which another component exists in between.
  • When a component "includes" or "has" another component, this means that other components may be further included rather than excluded, unless otherwise stated.
  • Terms such as first and second are used only to distinguish one component from another and do not limit the order or importance of the components unless otherwise noted. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • components that are distinguished from each other are intended to clearly describe each feature, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed to form a plurality of hardware or software units. Therefore, even if not stated otherwise, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • components described in various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment consisting of a subset of components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments including other elements in addition to the elements described in the various embodiments are included in the scope of the present disclosure.
  • FIG. 1 is a block diagram showing the configuration of a GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • the GAN-based medical image learning apparatus may include a lesion learning unit 10 and a learning data set management unit 15.
  • The lesion learning unit 10 is a component that handles training of the lesion learning model 110 and trains the lesion learning model 110 using the learning data set provided by the learning data set management unit 15.
  • The lesion learning model 110 may be a learning model built through GAN-based semi-supervised learning and may be composed of a combination of a generator and a discriminator.
  • Correspondingly, the lesion learning unit 10 may include a generator learning unit 11 that manages learning of the generator and a discriminator learning unit 12 that manages learning of the discriminator.
  • The generator learning unit 11 may perform learning so that the generator can generate a medical image.
  • The discriminator learning unit 12 may receive the medical image generated by the generator (hereinafter referred to as the 'second medical image'), the medical image provided by the learning data set management unit 15 (hereinafter referred to as the 'first medical image'), and the labeling data as inputs, and may be trained to determine the labeling data among the received inputs.
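The adversarial training step described above can be illustrated with a minimal sketch. This is not the patented implementation: the network architectures, image size, losses, and optimizers below are assumptions chosen only to show how a discriminator can be trained to single out the labeling data while a generator learns to produce second medical images.

```python
# Illustrative only: shapes, losses, and optimizers are assumptions, not taken from the patent.
import torch
import torch.nn as nn

IMG = 64 * 64  # assumed single-channel 64x64 medical images, flattened

generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(first_img, labeling_img):
    """One adversarial update: the discriminator is pushed to score the labeling data
    high and both the real (first) and generated (second) medical images low."""
    batch = first_img.size(0)
    second_img = generator(torch.randn(batch, 100))        # 'second medical image'

    # Discriminator update.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(labeling_img), torch.ones(batch, 1)) +
              bce(discriminator(first_img), torch.zeros(batch, 1)) +
              bce(discriminator(second_img.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make generated images fool the discriminator.
    g_opt.zero_grad()
    g_loss = bce(discriminator(second_img), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Example call with random stand-in tensors.
d, g = train_step(torch.randn(8, IMG), torch.randn(8, IMG))
```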
  • the lesion learning model 110 may include a lesion region detection learning model 111 and a lesion diagnosis learning model 113.
  • the lesion region detection learning model 111 may be a learning model that receives a medical image and detects and outputs a lesion region in the medical image.
  • The lesion diagnosis learning model 113 may be a learning model that receives the lesion region provided by the lesion region detection learning model 111 and determines and outputs the disease type, disease severity, and the like.
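As a rough structural sketch of how the two models could be chained, the placeholder functions below stand in for the lesion region detection learning model 111 and the lesion diagnosis learning model 113; the function names and the mask-based stand-in logic are illustrative assumptions, not part of the disclosure.

```python
# A structural sketch only: the two models are represented by placeholder callables.
import numpy as np

def detect_lesion_region(medical_image: np.ndarray) -> np.ndarray:
    """Stand-in for the lesion region detection learning model (111):
    returns a binary mask marking where the lesion occurs."""
    return (medical_image > medical_image.mean()).astype(np.uint8)

def diagnose_lesion(lesion_region: np.ndarray) -> dict:
    """Stand-in for the lesion diagnosis learning model (113):
    returns a disease type and severity for the given lesion-region image."""
    area = float(lesion_region.sum()) / lesion_region.size
    return {"disease_type": "example_type", "severity": "high" if area > 0.5 else "low"}

image = np.random.rand(64, 64)
region = detect_lesion_region(image)   # output of model 111 ...
result = diagnose_lesion(region)       # ... becomes the input of model 113
print(result)
```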
  • the generator learning unit 11 may perform learning so that the generator provided in the lesion region detection learning model 111 can generate a second medical image.
  • The discriminator learning unit 12 may perform learning so that the discriminator receives the first medical image, the second medical image, and the labeling data and identifies the labeling data.
  • the labeling data may be an image in which a region in which a lesion occurs is detected in the first medical image.
  • The generator learning unit 11 may perform learning so that the generator provided in the lesion diagnosis learning model 113 can generate an image of the region where a lesion occurs, and the discriminator learning unit 12 may perform learning so that the discriminator receives images of the region where the lesion occurs (e.g., the lesion-region image provided by the generator, the lesion-region image provided from the learning data set, etc.) together with the labeling data and identifies the labeling data.
  • the labeling data may be an image in which the type of disease and the severity of the disease are detected based on the image of the area where the lesion occurs.
  • Furthermore, the generator learning unit 11 may resize the lesion-region image provided by the generator to match the size and resolution of the first and second medical images, and then provide the resized image to the discriminator for learning.
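The resizing step can be sketched as follows, assuming single-channel images stored as numpy arrays; the nearest-neighbour interpolation used here is only one possible choice.

```python
# A minimal resizing sketch; interpolation scheme and array layout are assumptions.
import numpy as np

def resize_to_match(lesion_img: np.ndarray, reference_img: np.ndarray) -> np.ndarray:
    """Resize the generator's lesion-region image to the size of the reference
    (first/second) medical image before handing it to the discriminator."""
    target_h, target_w = reference_img.shape
    src_h, src_w = lesion_img.shape
    rows = np.arange(target_h) * src_h // target_h
    cols = np.arange(target_w) * src_w // target_w
    return lesion_img[rows[:, None], cols]

lesion_patch = np.random.rand(32, 48)
first_image = np.zeros((64, 64))
resized = resize_to_match(lesion_patch, first_image)   # shape (64, 64)
```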
  • In the above-described embodiment, while the lesion learning model 110 is being trained, the generator constructs images that are used for learning of the discriminator. However, if every image generated by the generator is used by the discriminator, the performance of the discriminator may be limited by the performance level of the generator.
  • Therefore, the lesion learning apparatus according to an embodiment of the present disclosure is preferably configured to use the images generated by the generator selectively for learning of the discriminator.
  • To this end, the generator learning unit 11 does not deliver all of the second medical images generated by the generator to the discriminator; instead, it delivers them to the learning data set management unit 15, which then selects images and provides them to the discriminator.
  • the generator learning unit 11 provides the second medical image generated in the learning process of the lesion learning model 110 to the learning data set management unit 15.
  • the learning data set management unit 15 may include a second medical image filtering unit 16 that selects a second medical image.
  • Preferably, the learning data set management unit 15 may include a histogram verification unit 16 that checks the histogram of the second medical image; the histogram verification unit 16 may check the histogram of the second medical image provided by the generator learning unit 11 and provide it to the second medical image filtering unit 16.
  • the histogram checking unit 16 may check the histogram of the first medical image 151 included in the training data set 150 and provide the result to the second medical image filtering unit 16.
  • Based on the foregoing, the second medical image filtering unit 16 may compare the histogram of the first medical image 151 with the histogram of the second medical image and filter only images that correspond to the first medical image.
  • For example, the second medical image filtering unit 16 may construct at least one piece of reference histogram information using the histogram of the first medical image 151, compare the at least one piece of reference histogram information with the histogram of the second medical image, and determine a second medical image that exceeds the at least one piece of reference histogram information as the filtered image.
  • Thereafter, the second medical image filtering unit 16 may provide the determined image, that is, the filtered image, and the learning data set management unit 15 may store it as the filtered image 153 in the learning data set 150.
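A minimal sketch of this histogram-based filtering is given below, assuming grayscale images with values in [0, 1]; the histogram-intersection score and the threshold value are assumptions, since the disclosure does not fix a particular similarity measure.

```python
# A sketch of the second-medical-image filtering unit (16); measure and threshold are assumptions.
import numpy as np

def normalized_hist(img: np.ndarray, bins: int = 32) -> np.ndarray:
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def build_reference_histogram(first_images: list[np.ndarray]) -> np.ndarray:
    """Reference histogram information derived from the first medical images."""
    return np.mean([normalized_hist(img) for img in first_images], axis=0)

def filter_generated(second_images: list[np.ndarray],
                     reference: np.ndarray,
                     threshold: float = 0.8) -> list[np.ndarray]:
    """Keep only generated images whose histogram similarity to the reference
    exceeds the threshold; these become the 'filtered images' (153)."""
    kept = []
    for img in second_images:
        similarity = np.minimum(normalized_hist(img), reference).sum()  # histogram intersection
        if similarity > threshold:
            kept.append(img)
    return kept

reference = build_reference_histogram([np.random.rand(64, 64) for _ in range(4)])
filtered = filter_generated([np.random.rand(64, 64) for _ in range(8)], reference)
```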
  • Based on the above-described structure, the learning data set 150 may contain the first medical image 151, the labeling data 152, and the filtered image 153, and the discriminator learning unit 12 may perform discriminator learning of the lesion learning model 110 using the first medical image 151, the labeling data 152, and the filtered image 153 included in the learning data set 150.
  • Meanwhile, an error may occur in the first medical image due to data loss in the process of collecting it, for example, during capture, transmission, or storage of the medical image. Since the first medical image is important data used for learning, such an error may prevent the lesion learning model 110 from being trained accurately. In consideration of this, the learning data set management unit 15 may further include an error image management unit 18 that checks and manages whether the first medical image has an error.
  • the error image management unit 18 may include an error image check unit 18a for checking whether an error exists in the first medical image, and an error image restoration unit 18b for restoring the error image.
  • the error image verification unit 18a analyzes the first medical image and checks whether or not there is an error. For example, the error image verification unit 18a may check the histogram of the first medical image 151 and determine whether or not there is an error in the first medical image 151 by comparing it with a predetermined criterion.
  • The error image restoration unit 18b may use the second medical image to replace the first medical image 151 in which an error has occurred. For example, the error image restoration unit 18b may detect a second medical image with high similarity by comparing the histogram of the second medical image with the histogram of the first medical image 151 in which the error has occurred, and may restore the errored first medical image 151 by substituting the detected second medical image for it.
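The error check and restoration described above can be sketched as follows; the concentration threshold used to flag an errored histogram and the similarity measure used to pick a replacement are assumptions.

```python
# A sketch of the error image check (18a) and restoration (18b), assuming grayscale images in [0, 1].
import numpy as np

def has_error(img: np.ndarray, bins: int = 32, concentration: float = 0.5) -> bool:
    """Flag an image whose histogram is concentrated on a single intensity bin,
    e.g. because part of the image was lost during capture, storage, or transfer."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist.max() / max(hist.sum(), 1) > concentration

def restore_with_generated(errored: np.ndarray,
                           generated_images: list[np.ndarray],
                           bins: int = 32) -> np.ndarray:
    """Replace an errored first medical image with the generated (second) medical
    image whose histogram is most similar to it."""
    def hist(img):
        h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        return h / max(h.sum(), 1)
    target = hist(errored)
    scores = [np.minimum(hist(g), target).sum() for g in generated_images]
    return generated_images[int(np.argmax(scores))]

broken = np.zeros((64, 64))                 # an image with a large lost (constant) region
broken[:, :16] = np.random.rand(64, 16)
if has_error(broken):
    replacement = restore_with_generated(broken, [np.random.rand(64, 64) for _ in range(5)])
```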
  • The learning data set management unit 15 may check the amount of labeling data included in the learning data set 150 and, if it is less than a predetermined number or ratio, may construct and store labeling data.
  • the learning data set management unit 15 may include a learning model (not shown) for learning labeling data 152 corresponding to the first medical image 151.
  • the learning model (not shown) may be configured based on supervised learning.
  • The learning data set management unit 15 can generate labeling data by inputting the first medical image 151 into the learning model (not shown); however, labeling data generated in this way may have relatively low accuracy. In consideration of this, the learning data set management unit 15 may check the probability value of the labeling data output by the learning model (not shown), select labeling data having a relatively high probability value, and store it as the labeling data 152 of the learning data set 150.
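A compact sketch of this selective pseudo-labeling is shown below; the stand-in prediction function, the number of classes, and the probability threshold are assumptions introduced only for illustration.

```python
# A sketch of labeling-data augmentation via a supervised model with probability thresholding.
import numpy as np

def pseudo_label(images: list[np.ndarray], predict_proba, threshold: float = 0.9):
    """predict_proba(img) is assumed to return class probabilities for one image.
    Returns (image, class_index) pairs whose top probability exceeds the threshold."""
    selected = []
    for img in images:
        probs = predict_proba(img)
        best = int(np.argmax(probs))
        if probs[best] >= threshold:
            selected.append((img, best))    # kept and stored as additional labeling data
    return selected

# Stand-in for the "learning model (not shown)": random probabilities over 3 classes.
def dummy_predict(img):
    p = np.random.rand(3)
    return p / p.sum()

new_labels = pseudo_label([np.random.rand(64, 64) for _ in range(10)], dummy_predict)
```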
  • FIGS. 2A and 2B are diagrams illustrating an operation of learning a lesion learning model by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIG. 2A shows an operation of learning a lesion region detection learning model 111 by a GAN-based medical image learning apparatus.
  • The discriminator 21 learns the characteristics of the lesion region by learning the lesion region in advance.
  • the generator 22 may generate the second medical image 202.
  • The second medical image 202 generated in this way may be provided to the learning data set management unit 23, and the learning data set management unit 23 may check the histogram of the second medical image 202 and, based on it, perform filtering on the second medical image 202.
  • the learning data set management unit 23 may store and manage the filtered second medical image 203 in the learning data set.
  • The learning data set management unit 23 may provide the first medical image 201, the second medical image 203, and the first labeling data 205 to the discriminator 21, and the discriminator 21 may be trained to classify the first labeling data 205 among the provided data 201, 203, and 205.
  • the first labeling data 205 may be data obtained by extracting a lesion area from a medical image.
  • The lesion region detection learning model 111 can construct a GAN-based unsupervised learning model through the discriminator 21 and the generator 22 and, when a medical image is input, can detect and output the corresponding lesion region.
  • The lesion learning unit 10 may be configured to provide the first labeling data 205, that is, the data extracted from the medical image, to the lesion diagnosis learning model 113 when training the lesion region detection learning model 111, thereby linking the learning of the lesion region detection learning model 111 and the lesion diagnosis learning model 113.
  • The discriminator 21 learns the characteristics of the disease type and disease severity by learning the disease type and disease severity in advance.
  • the generator 22 may generate the fourth medical image 212.
  • the fourth medical image 212 generated by the generator 22 may be an image from which a lesion area is extracted.
  • The fourth medical image 212 generated through the above-described operation may be provided to the learning data set management unit 23, and the learning data set management unit 23 may check the histogram of the fourth medical image 212 and, based on it, filter the fourth medical image 212.
  • the learning data set management unit 23 may store and manage the filtered fourth medical image 213 in the learning data set.
  • The learning data set management unit 23 may provide the third medical image 211, the fourth medical image 213, and the second labeling data 215 to the discriminator 21, and the discriminator 21 may be trained to classify the second labeling data 215 among the provided data 211, 213, and 215.
  • the second labeling data 215 may be data obtained by extracting a disease type and a disease severity from an image of a lesion area.
  • the lesion region provided by the lesion region detection learning model 111 may be used as an input to the lesion diagnosis learning model 113.
  • The lesion diagnosis learning model 113 can construct a GAN-based unsupervised learning model through the discriminator 21 and the generator 22 and, when an image of the lesion region is input, can detect and output the corresponding disease type, disease severity, and the like.
  • FIG. 3 is a diagram illustrating an operation of restoring an error image by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • the learning data set management unit 15 requests the error image verification unit 18a to check the error of the first medical image 151 in order to check whether the first medical image 151 has an error.
  • The error image verification unit 18a may check the histogram information of the first medical image 151 (301) and, based on the confirmed histogram information, determine whether the first medical image 151 contains an error (302).
  • The histogram of a medical image 410 in a normal state may be evenly distributed, but when an error occurs during generation, storage, or transmission of the medical image, the image information for a part of the area may be lost. Accordingly, when the medical image 420 is constructed from data with such a loss, the lost information cannot be recovered, so the corresponding area 425 may appear as an abnormal area. When a medical image including the abnormal region 425 is constructed in this way, the histogram of the medical image 420 including the abnormal region 425 is not evenly distributed and may be concentrated on a specific color.
  • Accordingly, the error image verification unit 18a checks the histogram information of the first medical image 151 and, when the histogram information indicates a distribution concentrated on a specific color, that is, when the histogram value of a specific color exceeds a predetermined threshold, may determine that an error exists in the medical image.
  • The error image restoration unit 18b may be configured to replace the first medical image in which an error exists with a second medical image generated in the learning process of the lesion learning unit 10. Specifically, the similarity between the histograms of the first medical image and the second medical image is calculated (303), and when the calculated similarity exceeds a predetermined threshold, the second medical image may be determined as the image with which the errored first medical image is replaced (304).
  • Furthermore, the error image restoration unit 18b may calculate histograms in the horizontal and vertical directions in units of predetermined line areas of the first medical image and detect line areas in which the histogram value of a specific color exceeds a predetermined threshold. Based on this, the error image restoration unit 18b may detect error areas in the horizontal and vertical directions and distinguish error areas from non-error areas. Thereafter, the error image restoration unit 18b checks the histograms of the first medical image and the second medical image over the non-error region, calculates the histogram similarity, extracts a second medical image whose calculated similarity exceeds a predetermined threshold, and replaces the first medical image with it.
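The line-wise variant can be sketched as follows, assuming grayscale images in [0, 1]; treating a row or column as erroneous when one intensity bin dominates it, and computing histogram intersection over the remaining region, are assumptions about how the described criteria might be realised.

```python
# A sketch of line-wise error-region detection and non-error-region similarity.
import numpy as np

def line_is_errored(line: np.ndarray, bins: int = 16, concentration: float = 0.9) -> bool:
    hist, _ = np.histogram(line, bins=bins, range=(0.0, 1.0))
    return hist.max() / max(hist.sum(), 1) > concentration

def non_error_mask(img: np.ndarray) -> np.ndarray:
    """Mark pixels whose row and column both look normal (horizontal + vertical check)."""
    ok_rows = np.array([not line_is_errored(img[r, :]) for r in range(img.shape[0])])
    ok_cols = np.array([not line_is_errored(img[:, c]) for c in range(img.shape[1])])
    return ok_rows[:, None] & ok_cols[None, :]

def similarity_on_valid_region(first_img: np.ndarray, second_img: np.ndarray) -> float:
    """Histogram intersection computed only over the non-error region of the first image."""
    mask = non_error_mask(first_img)
    def hist(values):
        h, _ = np.histogram(values, bins=16, range=(0.0, 1.0))
        return h / max(h.sum(), 1)
    return float(np.minimum(hist(first_img[mask]), hist(second_img[mask])).sum())

damaged = np.random.rand(64, 64)
damaged[20:30, :] = 0.0                      # simulate lost horizontal lines
score = similarity_on_valid_region(damaged, np.random.rand(64, 64))
```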
  • Although the operation of checking an error image and determining a replacement image has been exemplified using the first and second medical images, the present disclosure is not limited thereto and may be variously changed or applied.
  • the operation of checking the error image and determining the reconstructed image may be applied to the third and fourth medical images.
  • FIG. 5 is a flowchart illustrating a procedure of a GAN-based medical image learning method according to an embodiment of the present disclosure.
  • the GAN-based medical image learning method may be performed by the GAN-based medical image learning apparatus described above.
  • the apparatus for learning a medical image may configure and store a learning data set.
  • the training data set is a data set for learning a lesion learning model, and may include a medical image and labeling data.
  • the lesion learning model may include a lesion region detection learning model and a lesion diagnosis learning model.
  • the lesion region detection learning model is a learning model that receives a medical image and detects and outputs a lesion area in the medical image.
  • the lesion diagnosis learning model may be a learning model that receives a region in which a lesion is generated provided from a lesion region detection learning model, determines a type of disease, a severity of a disease, and the like, and outputs it. Based on this, the medical image and the labeling data may be configured such that data provided to the lesion region detection learning model and the lesion diagnosis learning model are classified.
  • The first medical image provided to the lesion region detection learning model may be a medical image obtained by photographing a patient's body, and the first labeling data may be an image in which the region where a lesion occurs is detected in the first medical image.
  • The medical image provided to the lesion diagnosis learning model may be an image (hereinafter referred to as the third medical image) obtained by resizing the image of the lesion region to the size and resolution of the first medical image, and the second labeling data may be an image in which the disease type and disease severity are detected based on the image of the region where the lesion occurs.
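As an illustrative way to organise these elements, the dataclasses below sketch one possible layout of a learning data set entry; all field names are assumptions introduced for clarity and are not identifiers from the disclosure.

```python
# A structural sketch of the learning data set described above; field names are assumed.
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class LearningDataSetEntry:
    first_medical_image: np.ndarray                    # body scan fed to the detection model
    first_labeling_data: Optional[np.ndarray] = None   # lesion-region annotation, if available
    third_medical_image: Optional[np.ndarray] = None   # resized lesion-region image for diagnosis
    second_labeling_data: Optional[dict] = None        # e.g. {"disease_type": ..., "severity": ...}

@dataclass
class LearningDataSet:
    entries: List[LearningDataSetEntry] = field(default_factory=list)
    filtered_images: List[np.ndarray] = field(default_factory=list)  # generator outputs kept by filtering

dataset = LearningDataSet()
dataset.entries.append(LearningDataSetEntry(first_medical_image=np.random.rand(64, 64)))
```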
  • The apparatus for learning a medical image may check and manage whether the first medical image (or the third medical image) contains an error. Specifically, the apparatus analyzes the first medical image (or the third medical image) to check whether there is an error. For example, the apparatus may check the histogram of the first medical image (or the third medical image) and determine whether there is an error by comparing it with a predetermined criterion.
  • In addition, the apparatus for learning a medical image may use a second medical image (or a fourth medical image) to replace the first medical image (or the third medical image) in which an error has occurred.
  • For example, the apparatus may compare the histogram of the second medical image (or the fourth medical image) with the histogram of the first medical image (or the third medical image) in which the error has occurred and detect a second medical image (or a fourth medical image) with high similarity.
  • The apparatus may then replace the errored first medical image (or third medical image) with the detected second medical image (or fourth medical image).
  • In addition, the medical image learning apparatus may check the amount of labeling data included in the learning data set and, if it is less than a predetermined number or ratio, construct and store labeling data.
  • the apparatus for learning medical images may include a learning model (not shown) for learning labeling data corresponding to a first medical image.
  • the learning model (not shown) may be configured based on supervised learning.
  • The apparatus for learning medical images may generate labeling data by inputting the first medical image into the learning model (not shown), and labeling data generated in this manner may have relatively low accuracy.
  • the apparatus for learning a medical image may check a probability value of labeling data output by a learning model (not shown), select labeling data having a relatively high probability value, and store it as labeling data of a training data set.
  • In step S520, the apparatus for learning a medical image trains the lesion learning model using the learning data set.
  • The lesion learning model may be a learning model built through GAN-based semi-supervised learning and may be composed of a combination of a generator and a discriminator.
  • The apparatus for learning a medical image may perform learning of the generator and the discriminator, respectively.
  • The apparatus may perform learning so that the generator can generate a medical image.
  • The apparatus may also perform learning of the discriminator.
  • Specifically, the discriminator may be trained to receive the medical image generated by the generator, the medical image of the learning data set, and the labeling data, and to determine the labeling data among the received inputs.
  • For example, the medical image learning apparatus may perform learning so that the generator provided in the lesion region detection learning model generates a second medical image, and in step S522 the discriminator may receive the first medical image, the second medical image, and the first labeling data and be trained to identify the first labeling data.
  • the first labeling data may be an image obtained by detecting a region in which a lesion occurs in the first medical image.
  • Likewise, the medical image learning apparatus may perform learning so that the generator provided in the lesion diagnosis learning model can generate an image of the region where the lesion occurs (hereinafter referred to as the 'fourth medical image').
  • The discriminator may receive the fourth medical image, the lesion-region image provided from the learning data set (i.e., the third medical image), and the second labeling data, and may be trained to identify the second labeling data.
  • the second labeling data may be an image in which the type of disease and the severity of the disease are detected based on the image of the area where the lesion occurs.
  • Furthermore, the medical image learning apparatus may resize the lesion-region image provided by the generator to match the size and resolution of the first and second medical images and then provide the resized image to the discriminator for learning.
  • In the above, it has been exemplified that, while the lesion learning model is being trained, the generator constructs images that are used for learning of the discriminator.
  • The apparatus for learning a medical image is preferably configured to use the images generated by the generator selectively for learning of the discriminator.
  • To this end, the medical image learning apparatus does not deliver all of the second medical images (or fourth medical images) generated by the generator to the discriminator, but may be configured to selectively provide the second medical images (or fourth medical images) to the discriminator.
  • the apparatus for learning a medical image may filter the medical image generated in step S520. Specifically, an operation of checking the second medical image generated in the learning process of the lesion region detection learning model and selecting the second medical image may be performed.
  • the apparatus for learning a medical image may check the histogram of the first medical image and the histogram of the second medical image, compare them, and filter only the image corresponding to the first medical image.
  • For example, the medical image learning apparatus may construct at least one piece of reference histogram information using the histogram of the first medical image, compare it with the histogram of the second medical image, and determine a second medical image that exceeds the reference histogram information as the filtered image.
  • the apparatus for learning a medical image may check the fourth medical image generated during the learning process of the lesion diagnosis learning model, and may perform an operation of selecting the fourth medical image.
  • the apparatus for learning a medical image may check the histogram of the third medical image and the histogram of the fourth medical image, compare them, and filter only the image corresponding to the third medical image.
  • For example, the medical image learning apparatus may construct at least one piece of reference histogram information using the histogram of the third medical image, compare it with the histogram of the fourth medical image, and determine a fourth medical image that exceeds the reference histogram information as the filtered image.
  • the apparatus for learning a medical image may configure and provide the determined image, that is, the filtered image as learning data (S540).
  • the apparatus for learning a medical image may proceed to step S510 to store the filtered image in the learning data set.
  • the learning data set may include the first medical image, the third medical image, the first labeling data, the second labeling data, the filtered second medical image, and the filtered fourth medical image.
  • In this way, the apparatus for learning a medical image may perform discriminator learning of the lesion learning model using the data included in the learning data set.
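Putting the steps of FIG. 5 together, the loop below sketches one possible orchestration in which generated images are filtered and fed back into the learning data set before the next discriminator update; every function here is a local stand-in, and the step decomposition is a simplification of S510 to S540.

```python
# A high-level sketch of the overall loop: maintain the data set, train, filter, feed back.
import numpy as np

def train_generator_step(dataset):
    """Stand-in for the generator update (S520); returns newly generated second medical images."""
    return [np.random.rand(64, 64) for _ in range(4)]

def train_discriminator_step(dataset):
    """Stand-in for the discriminator update, which would use the first medical images,
    the labeling data, and the filtered images stored in the learning data set."""
    pass

def histogram_filter(images, dataset, threshold=0.8):
    """Keep only generated images whose histogram resembles the first medical images."""
    def hist(img):
        h, _ = np.histogram(img, bins=32, range=(0.0, 1.0))
        return h / max(h.sum(), 1)
    reference = np.mean([hist(img) for img in dataset["first_images"]], axis=0)
    return [img for img in images if np.minimum(hist(img), reference).sum() > threshold]

dataset = {"first_images": [np.random.rand(64, 64) for _ in range(4)],
           "labeling_data": [], "filtered_images": []}

for epoch in range(3):
    generated = train_generator_step(dataset)     # generator produces second medical images
    kept = histogram_filter(generated, dataset)   # data set manager filters them
    dataset["filtered_images"].extend(kept)       # kept images stored back (S540 -> S510)
    train_discriminator_step(dataset)             # discriminator trained on the updated set
```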
  • FIG. 6 is a block diagram illustrating a computing system that executes a GAN-based medical image learning method and apparatus according to an embodiment of the present disclosure.
  • The computing system 1000 may include at least one processor 1100, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage 1600, and a network interface 1700, connected through a bus 1200.
  • the processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600.
  • the memory 1300 and the storage 1600 may include various types of volatile or nonvolatile storage media.
  • the memory 1300 may include read only memory (ROM) and random access memory (RAM).
  • The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by the processor 1100, or in a combination of the two.
  • The software module may reside in a storage medium (i.e., the memory 1300 and/or the storage 1600) such as RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, or a CD-ROM.
  • An exemplary storage medium is coupled to the processor 1100, which can read information from and write information to the storage medium.
  • the storage medium may be integral with the processor 1100.
  • the processor and storage media may reside within an application specific integrated circuit (ASIC).
  • the ASIC may reside within the user terminal.
  • the processor and storage medium may reside as separate components within the user terminal.
  • Although the exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, this is not intended to limit the order in which the steps are performed, and each step may be performed simultaneously or in a different order if necessary.
  • The exemplary steps may include additional steps, may include the remaining steps with some steps excluded, or may include additional other steps with some steps excluded.
  • various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof.
  • In the case of implementation by hardware, the implementation may be carried out by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), general-purpose processors, controllers, microcontrollers, microprocessors, or the like.
  • The scope of the present disclosure includes software or machine-executable instructions (e.g., an operating system, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium that stores such software or instructions and is executable on the device or computer.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A GAN-based medical image learning device can be provided according to the present invention. The GAN-based medical image learning device comprises: a learning data set management unit for managing a learning data set comprising a first medical image, labeled data and a filtered medical image; and a lesion learning unit having a generator learning part, for managing learning of a generator for generating a second medical image, and a discriminator learning part for managing learning of a discriminator, for constructing the labeled data, by means of the first medical image, the labeled data and the medical image filtered from the second image generated by means of the generator, wherein the learning data set management unit can have a second medical image filtering part for filtering the second medical image and selectively providing same as the filtered medical image.

Description

A method and apparatus for learning medical images based on a generative adversarial network
The present disclosure relates to a technique for training a deep learning model and, more specifically, to a method and an apparatus for learning lesions through unsupervised learning.
Deep learning learns from a very large amount of data and, when new data are input, selects the answer with the highest probability based on the learning result. Such deep learning can operate adaptively according to an image and automatically finds characteristic factors in the process of learning a model from data, so attempts to utilize it in the field of artificial intelligence have recently been increasing.
However, in order for a trained model to derive accurate results, learning from a large amount of data is required.
In particular, in order to apply artificial intelligence technology to the medical field, a large amount of data confirmed by experts (i.e., labeling data) is essential; however, due to time and cost constraints, it is not easy to build such a large amount of expert-confirmed data.
The technical task of the present disclosure is to provide a method and apparatus for learning medical images based on a Generative Adversarial Network (GAN) that can build a high-performance learning model using a small amount of labeling data.
The technical problems to be achieved in the present disclosure are not limited to the technical problems mentioned above, and other technical problems that are not mentioned will be clearly understood from the following description by those of ordinary skill in the art to which the present disclosure belongs.
According to an aspect of the present disclosure, a GAN-based medical image learning apparatus may be provided. The apparatus for training a medical image learning model includes: a learning data set management unit that manages a learning data set including a first medical image, labeling data, and a filtered medical image; and a lesion learning unit having a generator learning unit that manages learning of a generator (Generator) generating a second medical image, and a discriminator learning unit that manages learning of a discriminator (Discriminator) constructing the labeling data by using the first medical image, the labeling data, and the medical image filtered from the second medical image generated through the generator. The learning data set management unit may include a second medical image filtering unit that filters the second medical image and selectively provides it as the filtered medical image.
According to another aspect of the present disclosure, a GAN-based medical image learning method may be provided. The method is a method of training a learning model for lesion detection and includes: managing and storing a learning data set including a first medical image, labeling data, and a filtered medical image; and performing learning of a lesion learning model that includes a generator (Generator) generating a second medical image and a discriminator (Discriminator) constructing the labeling data by using the first medical image, the labeling data, and the medical image filtered from the second medical image generated through the generator. Performing the learning of the lesion learning model includes detecting and providing the second medical image generated by the generator, and managing and storing the learning data set includes filtering the second medical image and selectively storing and managing it as the filtered medical image.
The features briefly summarized above with respect to the present disclosure are only exemplary aspects of the detailed description that follows and do not limit the scope of the present disclosure.
According to the present disclosure, a GAN-based medical image learning method and apparatus capable of building a high-performance learning model using a small amount of labeling data may be provided.
The effects obtainable from the present disclosure are not limited to the effects mentioned above, and other effects that are not mentioned will be clearly understood from the following description by those of ordinary skill in the art to which the present disclosure belongs.
FIG. 1 is a block diagram showing the configuration of a GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
FIGS. 2A and 2B are diagrams illustrating an operation of training a lesion learning model by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
FIG. 3 is a diagram illustrating an operation of restoring an error image by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
FIGS. 4A and 4B are diagrams illustrating a medical image in a normal state and a medical image in an abnormal state used in the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
FIG. 5 is a flowchart illustrating a procedure of a GAN-based medical image learning method according to an embodiment of the present disclosure.
FIG. 6 is a block diagram illustrating a computing system that executes a GAN-based medical image learning method and apparatus according to an embodiment of the present disclosure.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art can easily implement them. However, the present disclosure may be implemented in various different forms and is not limited to the embodiments described herein.
In describing the embodiments of the present disclosure, when it is determined that a detailed description of a known configuration or function may obscure the subject matter of the present disclosure, the detailed description thereof is omitted. In the drawings, parts not related to the description of the present disclosure are omitted, and similar reference numerals are attached to similar parts.
In the present disclosure, when a component is said to be "connected", "coupled", or "linked" with another component, this may include not only a direct connection but also an indirect connection in which another component exists in between. In addition, when a component "includes" or "has" another component, this means that other components may be further included rather than excluded, unless otherwise stated.
In the present disclosure, terms such as first and second are used only to distinguish one component from another and do not limit the order or importance of the components unless otherwise noted. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
In the present disclosure, components that are distinguished from each other are intended to clearly describe their respective features and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed to form a plurality of hardware or software units. Therefore, even if not stated otherwise, such integrated or distributed embodiments are also included in the scope of the present disclosure.
In the present disclosure, the components described in the various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment consisting of a subset of the components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments that include other components in addition to the components described in the various embodiments are also included in the scope of the present disclosure.
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings.
도 1은 본 개시의 일 실시예에 따른 GAN 기반의 의료영상 학습 장치의 구성을 나타내는 블록도이다.1 is a block diagram showing the configuration of a GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
도 1을 참조하면, GAN 기반의 의료영상 학습 장치는 병변 학습부(10) 및 학습 데이터 셋 관리부(15)를 포함할 수 있다.Referring to FIG. 1, the GAN-based medical image learning apparatus may include a lesion learning unit 10 and a learning data set management unit 15.
병변 학습부(10)는 병변 학습모델(110)에 대한 학습을 처리하는 구성부로서, 학습 데이터 셋 관리부(15)에서 제공하는 학습 데이터 셋을 사용하여 병변 학습모델(110)에 대한 학습을 처리한다.The lesion learning unit 10 is a component that processes learning about the lesion learning model 110, and processes learning about the lesion learning model 110 using the learning data set provided by the learning data set management unit 15. do.
특히, 병변 학습모델(110)은 GAN 기반의 준 지도학습을 통해 구축된 학습 모델일 수 있으며, 제너레이터(Generator)와 디스크리미네이터(Discriminator)의 조합으로 구성될 수 있다. 이에 대응하여, 병변 학습부(10)는 제너레이터의 학습을 관리하는 제너레이터 학습부(11)와 디스크리미네이터의 학습을 관리하는 디스크리미네이터 학습부(12)를 포함할 수 있다.In particular, the lesion learning model 110 may be a learning model built through GAN-based quasi-supervised learning, and may be composed of a combination of a generator and a discriminator. Correspondingly, the lesion learning unit 10 may include a generator learning unit 11 that manages learning of the generator and a discreminator learning unit 12 that manages the learning of the discreminator.
제너레이터 학습부(11)는 제너레이터가 의료영상을 생성할 수 있도록 학습을 수행할 수 있다. 그리고, 디스크리미네이터 학습부(12)는 제너레이터에 의해 생성된 의료영상(이하, '제2의료영상' 이라 함)과, 학습 데이터 셋 관리부(15)에서 제공하는 의료영상(이하, '제1의료영상' 이라함) 및 라벨링 데이터를 입력받고, 입력받은 영상 중에서 라벨링 데이터를 결정할 수 있도록 학습될 수 있다.The generator learning unit 11 may perform learning so that the generator can generate a medical image. In addition, the discreminator learning unit 12 includes a medical image generated by the generator (hereinafter, referred to as a'second medical image') and a medical image provided by the learning data set management unit 15 (hereinafter, referred to as a'first medical image'). Medical images') and labeling data may be input, and the learning may be performed to determine labeling data among the received images.
나아가, 병변 학습모델(110)은 병변영역 검출 학습모델(111) 및 병변진단 학습모델(113)을 포함할 수 있다. 병변영역 검출 학습모델(111)은 의료영상을 입력받고, 의료영상에서 병변이 발생되는 영역을 검출하여 출력하는 학습모델일 수 있다. 그리고, 병변진단 학습모델(113)은 병변영역 검출 학습모델(111)에서 제공되는 병변이 발생되는 영역을 입력받고, 질환의 종류, 질환의 중증도 등을 결정하여 출력하는 학습모델일 수 있다.Furthermore, the lesion learning model 110 may include a lesion region detection learning model 111 and a lesion diagnosis learning model 113. The lesion region detection learning model 111 may be a learning model that receives a medical image and detects and outputs a lesion region in the medical image. In addition, the lesion diagnosis learning model 113 may be a learning model that receives a region in which a lesion is generated provided from the lesion region detection learning model 111 and determines and outputs a disease type and disease severity.
전술한, 병변 학습모델(110)의 구조에 기초하여, 제너레이터 학습부(11)는 병변영역 검출 학습모델(111)에 구비된 제너레이터가 제2의료영상을 생성할 수 있도록 학습을 수행할 수 있으며, 디스크리미네이터 학습부(12)는 디스크리미네이터가 제1의료영상, 제2의료영상, 및 라벨링 데이터를 입력받고, 라벨링 데이터를 식별할 수 있도록 학습을 수행할 수 있다. 이때, 라벨링 데이터는 제1의료영상에서 병변이 발생되는 영역을 검출한 영상일 수 있다.Based on the structure of the lesion learning model 110 described above, the generator learning unit 11 may perform learning so that the generator provided in the lesion region detection learning model 111 can generate a second medical image. , The disc limiter learning unit 12 may perform learning so that the disc limiter receives the first medical image, the second medical image, and labeling data, and identifies the labeling data. In this case, the labeling data may be an image in which a region in which a lesion occurs is detected in the first medical image.
또한, 제너레이터 학습부(11)는 병변진단 학습모델(113)에 구비된 제너레이터가 병변이 발생되는 영역의 영상을 생성할 수 있도록 학습을 수행할 수 있으며, 디스크리미네이터 학습부(12)는 디스크리미네이터가 병변이 발생되는 영역의 영상(예, 제너레이터가 제공하는 병변이 발생되는 영역의 영상, 학습 데이터 셋에서 제공되는 병변이 발생되는 영역의 영상 등)과, 및 라벨링 데이터를 입력받고, 라벨링 데이터를 식별할 수 있도록 학습을 수행할 수 있다. 이때, 라벨링 데이터는 병변이 발생되는 영역의 영상을 기반으로 질환의 종류, 질환의 중증도 등을 검출한 영상일 수 있다.In addition, the generator learning unit 11 may perform learning so that the generator provided in the lesion diagnosis learning model 113 can generate an image of an area where a lesion occurs, and the discreminator learning unit 12 The reminator receives the image of the area where the lesion occurs (e.g., the image of the area where the lesion is generated provided by the generator, the image of the area where the lesion is generated provided from the training data set, etc.), and labeling data. Learning can be performed to identify the data. In this case, the labeling data may be an image in which the type of disease and the severity of the disease are detected based on the image of the area where the lesion occurs.
나아가, 제너레이터 학습부(11)는 제너레이터가 제공하는 병변이 발생되는 영역의 영상을 제1 및 제2의료영상의 크기 및 해상도에 맞춰 리사이즈한 후, 리사이징된 영상을 디스크리미네이터에 제공하여 학습을 처리할 수 있다.Furthermore, the generator learning unit 11 resizes the image of the lesion-prone area provided by the generator according to the size and resolution of the first and second medical images, and then provides the resized image to the discreminator for learning. You can handle it.
In the embodiment described above, while the lesion learning model 110 is trained, the generator constructs a predetermined image that is used for training the discriminator. However, if every image generated by the generator is used by the discriminator, the performance of the discriminator may be affected by the performance level of the generator. Accordingly, the lesion learning apparatus according to an embodiment of the present disclosure is preferably configured so that the images generated by the generator are used selectively for training the discriminator. To this end, the generator learning unit 11 may be configured not to deliver all of the second medical images generated by the generator to the discriminator, but to deliver them to the learning data set management unit 15, which then selects second medical images and provides them to the discriminator.
Specifically, the generator learning unit 11 provides the second medical images generated during the training of the lesion learning model 110 to the learning data set management unit 15. Correspondingly, the learning data set management unit 15 may include a second medical image filtering unit 16 that selects second medical images.
Preferably, the learning data set management unit 15 may include a histogram verification unit 16 that checks the histogram of the second medical image. The histogram verification unit 16 may check the histogram of the second medical image provided by the generator learning unit 11 and provide it to the second medical image filtering unit 16. In addition, the histogram verification unit 16 may check the histogram of the first medical image 151 included in the learning data set 150 and provide the result to the second medical image filtering unit 16.
Based on the foregoing, the second medical image filtering unit 16 may compare the histogram of the first medical image 151 with the histogram of the second medical image and filter only the images corresponding to the first medical image. For example, the second medical image filtering unit 16 may construct at least one piece of reference histogram information using the histogram of the first medical image 151, compare the at least one piece of reference histogram information with the histogram of the second medical image, and determine a second medical image that exceeds the at least one piece of reference histogram information to be a filtered image. The second medical image filtering unit 16 may then provide the determined image, i.e., the filtered image, and the learning data set management unit 15 may store it in the learning data set 150 as the filtered image 153.
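For illustration only, the following sketch shows one way the histogram-based selection could be realized, assuming grayscale NumPy arrays, a mean histogram over the first medical images as the reference histogram information, and cosine similarity with a fixed threshold as the comparison criterion; none of these particulars are fixed by the present disclosure.

```python
import numpy as np


def image_histogram(img: np.ndarray, bins: int = 64) -> np.ndarray:
    """Normalized intensity histogram of a grayscale medical image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return hist


def build_reference_histogram(first_images: list[np.ndarray]) -> np.ndarray:
    """Reference histogram information derived from the first medical images."""
    return np.mean([image_histogram(im) for im in first_images], axis=0)


def filter_generated_images(second_images: list[np.ndarray],
                            reference_hist: np.ndarray,
                            threshold: float = 0.9) -> list[np.ndarray]:
    """Keep only generated (second) medical images whose histogram similarity to the reference exceeds the threshold."""
    kept = []
    for img in second_images:
        hist = image_histogram(img)
        # Cosine similarity between histograms; the comparison measure is an assumption.
        similarity = float(np.dot(hist, reference_hist) /
                           (np.linalg.norm(hist) * np.linalg.norm(reference_hist) + 1e-12))
        if similarity > threshold:
            kept.append(img)
    return kept
```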
Based on the structure described above, the learning data set 150 may contain the first medical image 151, the labeling data 152, and the filtered image 153, and the discriminator learning unit 12 may train the discriminator of the lesion learning model 110 using the first medical image 151, the labeling data 152, the filtered image 153, and the like included in the learning data set 150.
Meanwhile, an error may occur in the first medical image due to data loss during its collection, for example, while the medical image is captured, transmitted, or stored. Because the first medical image is important data used for learning, such an error may prevent the lesion learning model 110 from being trained accurately. In view of this, the learning data set management unit 15 may further include an error image management unit 18 that checks whether the first medical image contains an error and manages such images.
The error image management unit 18 may include an error image verification unit 18a that checks whether an error exists in the first medical image and an error image restoration unit 18b that restores the erroneous image.
The error image verification unit 18a analyzes the first medical image to determine whether it contains an error. For example, the error image verification unit 18a may check the histogram of the first medical image 151 and compare it with a predetermined criterion to determine whether the first medical image 151 is erroneous.
The error image restoration unit 18b may restore a second medical image as an image that replaces the erroneous first medical image 151. For example, the error image restoration unit 18b may compare the histogram of a second medical image with the histogram of the erroneous first medical image 151 and detect a second medical image with high similarity. The error image restoration unit 18b may then restore the detected second medical image in place of the erroneous first medical image 151.
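For illustration only, the sketch below combines the two roles of the error image management unit: flagging an image whose histogram is abnormally concentrated and picking a histogram-similar generated image as its replacement. Grayscale NumPy arrays, the concentration criterion, cosine similarity, and the threshold values are all assumptions, not requirements of the disclosure.

```python
import numpy as np
from typing import Optional


def intensity_histogram(img: np.ndarray, bins: int = 64) -> np.ndarray:
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return hist


def is_erroneous(img: np.ndarray, concentration_limit: float = 0.5) -> bool:
    """Flag an image whose histogram mass is concentrated in a single intensity bin,
    as happens when a lost region is filled with one value."""
    hist = intensity_histogram(img)
    return float(hist.max() / (hist.sum() + 1e-12)) > concentration_limit


def restore_from_generated(error_img: np.ndarray,
                           second_images: list[np.ndarray],
                           similarity_threshold: float = 0.9) -> Optional[np.ndarray]:
    """Return the generated (second) medical image whose histogram is most similar to the
    erroneous first medical image, provided the similarity clears the threshold."""
    ref = intensity_histogram(error_img)
    best, best_sim = None, -1.0
    for candidate in second_images:
        h = intensity_histogram(candidate)
        sim = float(np.dot(ref, h) / (np.linalg.norm(ref) * np.linalg.norm(h) + 1e-12))
        if sim > best_sim:
            best, best_sim = candidate, sim
    return best if best_sim > similarity_threshold else None
```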
Meanwhile, if the amount of labeling data is markedly small, the lesion learning model 110 may not be trained properly, so at least a certain amount of labeling data is required. To this end, the learning data set management unit 15 may check the number of labeling data items included in the learning data set 150 and, if fewer than a predetermined number or ratio are included, construct and store additional labeling data. For example, the learning data set management unit 15 may be provided with a learning model (not shown) that learns the labeling data 152 corresponding to the first medical image 151. Here, the learning model (not shown) may be configured on the basis of supervised learning. The learning data set management unit 15 may then input the first medical image 151 into the learning model (not shown) to generate labeling data; labeling data generated in this way may be relatively less accurate. In view of this, the learning data set management unit 15 may check the probability values of the labeling data output by the learning model (not shown), select labeling data with relatively high probability values, and store them as the labeling data 152 of the learning data set 150.
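For illustration only, the following sketch shows how such supplementary labeling data could be selected by a probability threshold. The supervised_model.predict interface and the 0.95 threshold are hypothetical placeholders; the disclosure only requires that labeling data with relatively high probability values be selected.

```python
import numpy as np


def generate_pseudo_labels(first_images: list[np.ndarray],
                           supervised_model,
                           min_probability: float = 0.95) -> list[tuple[np.ndarray, int]]:
    """Run a supervised model over unlabeled first medical images and keep only
    predictions whose probability score clears the threshold as additional labeling data."""
    accepted = []
    for img in first_images:
        probs = np.asarray(supervised_model.predict(img))  # hypothetical interface returning class probabilities
        label, score = int(np.argmax(probs)), float(np.max(probs))
        if score >= min_probability:
            accepted.append((img, label))
    return accepted
```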
FIGS. 2A and 2B are diagrams illustrating an operation of training a lesion learning model by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
First, FIG. 2A shows an operation of training the lesion region detection learning model 111 by the GAN-based medical image learning apparatus.
The discriminator 21 first learns the characteristics of lesion regions by training on lesion regions in advance. With the discriminator 21 pre-trained on lesion regions, the generator 22 may generate a second medical image 202. The second medical image 202 generated in this way may be provided to the learning data set management unit 23, which checks the histogram of the second medical image 202 and filters the second medical image 202 on that basis. The learning data set management unit 23 may then store and manage the filtered second medical image 203 in the learning data set. Thereafter, the learning data set management unit 23 may provide the first medical image 201, the second medical image 203, and the first labeling data 205 to the discriminator 21, and the discriminator 21 may be trained to classify the first labeling data 205 among the provided data 201, 203, and 205. Here, the first labeling data 205 may be data obtained by extracting a lesion region from a medical image.
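For illustration only, the following sketch outlines the data flow of one training round in FIG. 2A: generate, filter, then update the discriminator so that it singles out the labeling data among the provided inputs. The generator.sample, data_manager.filter_and_store, and discriminator.train_step interfaces are hypothetical placeholders standing in for whatever framework is used.

```python
import numpy as np


def train_detection_round(generator, discriminator, data_manager,
                          first_images, first_labeling_data, noise_dim: int = 128):
    """One illustrative round of the FIG. 2A flow (all object interfaces are assumptions)."""
    # 1. The generator produces candidate second medical images from random noise.
    noise = np.random.randn(len(first_images), noise_dim)
    second_images = generator.sample(noise)

    # 2. The learning data set management unit filters the generated images by
    #    histogram and stores the surviving ones in the learning data set.
    filtered_images = data_manager.filter_and_store(second_images)

    # 3. The discriminator receives the first medical images, the filtered second
    #    medical images, and the first labeling data, and is updated so that it
    #    identifies the labeling data among the provided inputs.
    loss = discriminator.train_step(real=first_images,
                                    generated=filtered_images,
                                    labeling_data=first_labeling_data)
    return loss
```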
In this way, the lesion region detection learning model 111 can build a GAN-based unsupervised learning model through the discriminator 21 and the generator 22, and when a medical image is input, it can detect and output the corresponding lesion region.
Furthermore, when training the lesion region detection learning model 111, the lesion learning unit 10 may be configured to provide the first labeling data 205, i.e., the data obtained by extracting the lesion region from the medical image, to the lesion diagnosis learning model 113, thereby linking the training of the lesion region detection learning model 111 with that of the lesion diagnosis learning model 113.
Meanwhile, referring to FIG. 2B, the discriminator 21 learns the characteristics of the disease type, the disease severity, and the like by training on them in advance. With the discriminator 21 pre-trained on the disease type, the disease severity, and the like, the generator 22 may generate a fourth medical image 212. Here, the fourth medical image 212 generated by the generator 22 may be an image from which a lesion region has been extracted.
The fourth medical image 212 generated through the operation described above may be provided to the learning data set management unit 23, which checks the histogram of the fourth medical image 212 and filters the fourth medical image 212 on that basis. The learning data set management unit 23 may then store and manage the filtered fourth medical image 213 in the learning data set. Thereafter, the learning data set management unit 23 may provide the third medical image 211, the fourth medical image 213, and the second labeling data 215 to the discriminator 21, and the discriminator 21 may be trained to classify the second labeling data 215 among the provided data 211, 213, and 215. Here, the second labeling data 215 may be data obtained by extracting the disease type, the disease severity, and the like from an image of a lesion region.
Furthermore, in order to link the training of the lesion region detection learning model 111 with that of the lesion diagnosis learning model 113, the lesion region 217 provided by the lesion region detection learning model 111 may be used as the input of the lesion diagnosis learning model 113 in place of the third medical image 211.
In this way, the lesion diagnosis learning model 113 can build a GAN-based unsupervised learning model through the discriminator 21 and the generator 22, and when an image of a lesion region is input, it can detect and output the corresponding disease type, disease severity, and the like.
FIG. 3 is a diagram illustrating an operation of restoring an erroneous image by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
When constructing the learning data set, the learning data set management unit 15 may request the error image verification unit 18a to check the first medical image 151 for errors. Correspondingly, the error image verification unit 18a may check the histogram information of the first medical image 151 (301) and determine, based on the checked histogram information, whether the first medical image 151 is erroneous (302).
As illustrated in FIG. 4A, a medical image 410 in a normal state may have an evenly distributed histogram, but when an error occurs while the medical image is generated, stored, or transmitted, the image information of some regions may be lost. Accordingly, when a medical image 420 is constructed from the data with such loss, the lost information cannot be restored as image information, so the corresponding region 425 may appear as an abnormal region. When a medical image includes such an abnormal region 425, the histogram of the medical image 420 including the abnormal region 425 is not evenly distributed over the colors, and the distribution is concentrated on a specific color.
In consideration of the above, the error image verification unit 18a may check the histogram information of the first medical image 151 and, when the histogram information shows a distribution concentrated on a specific color, i.e., when the histogram value of a specific color exceeds a predetermined threshold, determine that the medical image contains an error.
The error image restoration unit 18b may be configured to replace the erroneous first medical image with a second medical image generated during the training process of the lesion learning unit 10. Specifically, the similarity between the histograms of the first medical image and a second medical image is calculated (303), and when the calculated similarity exceeds a predetermined threshold, the corresponding second medical image may be determined to be the restored image that replaces the first medical image (304).
As another example, the error image restoration unit 18b may calculate horizontal and vertical histograms in units of predetermined line regions of the first medical image and detect line regions in which the histogram value of a specific color exceeds a predetermined threshold. Based on this, the error image restoration unit 18b may detect error regions in the horizontal and vertical directions and distinguish error regions from non-error regions. Thereafter, the error image restoration unit 18b may check the histograms of the first and second medical images with respect to the non-error regions, calculate the similarity of the histograms, extract a second medical image whose calculated similarity exceeds a predetermined threshold, and replace the first medical image with it.
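For illustration only, the sketch below flags horizontal line regions dominated by a single intensity value, which corresponds to the case where the histogram of a specific color exceeds a threshold within a line region; the band height, the dominance criterion, and the restriction to the horizontal direction are illustrative assumptions (the vertical direction would be handled symmetrically on image columns).

```python
import numpy as np


def locate_error_line_regions(img: np.ndarray, band_height: int = 16,
                              dominance_fraction: float = 0.8) -> list[tuple[int, int]]:
    """Return (top, bottom) row ranges whose pixels are dominated by a single intensity value,
    e.g. rows filled with one value after data loss."""
    flagged = []
    for top in range(0, img.shape[0], band_height):
        band = img[top:top + band_height]
        _, counts = np.unique(band, return_counts=True)
        if counts.max() / band.size > dominance_fraction:
            flagged.append((top, min(top + band_height, img.shape[0])))
    return flagged
```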
Although the embodiments of the present disclosure illustrate the operations of checking for erroneous images and determining restored images using the first and second medical images, the present disclosure is not limited thereto and may be variously modified or applied. For example, the operations of checking for erroneous images and determining restored images may also be applied to the third and fourth medical images.
FIG. 5 is a flowchart illustrating the procedure of a GAN-based medical image learning method according to an embodiment of the present disclosure.
The GAN-based medical image learning method may be performed by the GAN-based medical image learning apparatus described above.
First, in step S510, the medical image learning apparatus may construct and store a learning data set. The learning data set is a data set for training the lesion learning model and may include medical images and labeling data.
Furthermore, the lesion learning model may include a lesion region detection learning model and a lesion diagnosis learning model. The lesion region detection learning model may be a learning model that receives a medical image and detects and outputs the region of the medical image in which a lesion occurs, and the lesion diagnosis learning model may be a learning model that receives the lesion region provided by the lesion region detection learning model and determines and outputs the type of disease, the severity of the disease, and the like. Based on this, the medical images and labeling data may be organized so that the data provided to the lesion region detection learning model and the data provided to the lesion diagnosis learning model are distinguished from each other.
For example, the first medical image provided to the lesion region detection learning model may be a medical image obtained by imaging a patient's body, and the first labeling data may be an image in which the lesion region has been detected in the first medical image. The medical image provided to the lesion diagnosis learning model may be an image obtained by resizing the lesion-region image to match the size and resolution of the first medical image (hereinafter referred to as the third medical image), and the second labeling data may be an image in which the type of disease, the severity of the disease, and the like have been detected based on the lesion-region image.
Meanwhile, an error may occur in the first medical image (or the third medical image) due to data loss during its collection, for example, while the medical image is captured, transmitted, or stored. Because the first medical image is important data used for learning, such an error may prevent the lesion learning model from being trained accurately. In view of this, the medical image learning apparatus may check and manage whether the first medical image (or the third medical image) is erroneous. Specifically, the medical image learning apparatus analyzes the first medical image (or the third medical image) to determine whether it contains an error. For example, the medical image learning apparatus may check the histogram of the first medical image (or the third medical image) and compare it with a predetermined criterion to determine whether the first medical image (or the third medical image) is erroneous.
The medical image learning apparatus may restore a second medical image as an image that replaces the erroneous first medical image (or third medical image). For example, the medical image learning apparatus may compare the histogram of a second medical image (or a fourth medical image) with the histogram of the erroneous first medical image (or third medical image) and detect a second medical image (or fourth medical image) with high similarity. The medical image learning apparatus may then restore the detected second medical image (or fourth medical image) in place of the erroneous first medical image (or third medical image).
Meanwhile, if the amount of labeling data is markedly small, the lesion learning model may not be trained properly, so at least a certain amount of labeling data is required. To this end, the medical image learning apparatus may check the number of labeling data items included in the learning data set and, if fewer than a predetermined number or ratio are included, construct and store additional labeling data. For example, the medical image learning apparatus may be provided with a learning model (not shown) that learns the labeling data corresponding to the first medical image. Here, the learning model (not shown) may be configured on the basis of supervised learning. The medical image learning apparatus may then input the first medical image into the learning model (not shown) to generate labeling data; labeling data generated in this way may be relatively less accurate. In view of this, the medical image learning apparatus may check the probability values of the labeling data output by the learning model (not shown), select labeling data with relatively high probability values, and store them as labeling data of the learning data set.
In step S520, the medical image learning apparatus trains the lesion learning model using the learning data set.
In particular, the lesion learning model may be a learning model built through GAN-based semi-supervised learning and may be composed of a combination of a generator and a discriminator. Correspondingly, the medical image learning apparatus may train the generator and the discriminator, respectively. For example, the medical image learning apparatus may train the generator to generate medical images, and may also train the discriminator. Specifically, the discriminator may be trained to receive the medical images generated by the generator, the medical images of the learning data set, and the labeling data, and to determine the labeling data among the received images.
Furthermore, based on the structure of the lesion learning model described above, in step S521 the medical image learning apparatus may train the generator provided in the lesion region detection learning model to generate a second medical image, and in step S522 it may train the discriminator to receive the first medical image, the second medical image, and the first labeling data and to identify the first labeling data. Here, the first labeling data may be an image in which the lesion region has been detected in the first medical image.
In addition, in step S523 the medical image learning apparatus may train the generator provided in the lesion diagnosis learning model to generate an image of the region where a lesion occurs (hereinafter referred to as the 'fourth medical image'), and in step S524 it may train the discriminator to receive the fourth medical image, the lesion-region image provided from the learning data set (i.e., the third medical image), and the second labeling data, and to identify the second labeling data. Here, the second labeling data may be an image in which the type of disease, the severity of the disease, and the like have been detected based on the lesion-region image.
Furthermore, the medical image learning apparatus may resize the lesion-region image provided by the generator to match the size and resolution of the first and second medical images and then provide the resized image to the discriminator for learning.
In the embodiment described above, while the lesion learning model is trained, the generator constructs a predetermined image that is used for training the discriminator. However, if every image generated by the generator is used by the discriminator, the performance of the discriminator may be affected by the performance level of the generator. Accordingly, the medical image learning apparatus according to an embodiment of the present disclosure is preferably configured so that the images generated by the generator are used selectively for training the discriminator. To this end, the medical image learning apparatus may be configured not to deliver all of the second medical images (or fourth medical images) generated by the generator to the discriminator, but to provide the second medical images (or fourth medical images) to the discriminator selectively.
Specifically, in step S530, the medical image learning apparatus may filter the medical images generated in step S520. In detail, it may check the second medical images generated during the training of the lesion region detection learning model and select among them. Preferably, the medical image learning apparatus may check the histogram of the first medical image and the histogram of a second medical image, compare them, and filter only the images corresponding to the first medical image. For example, the medical image learning apparatus may construct at least one piece of reference histogram information using the histogram of the first medical image, compare the at least one piece of reference histogram information with the histogram of the second medical image, and determine a second medical image that exceeds the at least one piece of reference histogram information to be a filtered image. Likewise, the medical image learning apparatus may check the fourth medical images generated during the training of the lesion diagnosis learning model and select among them. Preferably, the medical image learning apparatus may check the histogram of the third medical image and the histogram of a fourth medical image, compare them, and filter only the images corresponding to the third medical image. For example, the medical image learning apparatus may construct at least one piece of reference histogram information using the histogram of the third medical image, compare the at least one piece of reference histogram information with the histogram of the fourth medical image, and determine a fourth medical image that exceeds the at least one piece of reference histogram information to be a filtered image.
Thereafter, the medical image learning apparatus may configure and provide the determined image, i.e., the filtered image, as learning data (S540). Correspondingly, the medical image learning apparatus may proceed to step S510 and store the filtered image in the learning data set.
Based on the structure described above, the learning data set may contain the first medical image, the third medical image, the first labeling data, the second labeling data, the filtered second medical image, the filtered fourth medical image, and the like, and the medical image learning apparatus may train the discriminator of the lesion learning model using the data included in the learning data set.
FIG. 6 is a block diagram illustrating a computing system that executes the GAN-based medical image learning method and apparatus according to an embodiment of the present disclosure.
Referring to FIG. 6, the computing system 1000 may include at least one processor 1100, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage 1600, and a network interface 1700, which are connected through a bus 1200.
The processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600. The memory 1300 and the storage 1600 may include various types of volatile or nonvolatile storage media. For example, the memory 1300 may include read-only memory (ROM) and random access memory (RAM).
Accordingly, the steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware executed by the processor 1100, in a software module, or in a combination of the two. The software module may reside in a storage medium (i.e., the memory 1300 and/or the storage 1600) such as RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, or a CD-ROM. An exemplary storage medium is coupled to the processor 1100, and the processor 1100 can read information from and write information to the storage medium. Alternatively, the storage medium may be integral with the processor 1100. The processor and the storage medium may reside within an application-specific integrated circuit (ASIC). The ASIC may reside within a user terminal. Alternatively, the processor and the storage medium may reside as separate components within a user terminal.
Although the exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, this is not intended to limit the order in which the steps are performed, and each step may be performed simultaneously or in a different order if necessary. To implement the method according to the present disclosure, additional steps may be included beyond the illustrated steps, some steps may be excluded while the remaining steps are included, or some steps may be excluded and additional other steps included.
The various embodiments of the present disclosure do not enumerate all possible combinations but are intended to describe representative aspects of the present disclosure, and the matters described in the various embodiments may be applied independently or in a combination of two or more.
In addition, the various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof. In the case of implementation by hardware, they may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, and the like.
The scope of the present disclosure includes software or machine-executable instructions (e.g., an operating system, an application, firmware, a program, etc.) that cause the operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium in which such software or instructions are stored and are executable on a device or computer.

Claims (14)

  1. A GAN-based medical image learning apparatus for training a medical image learning model, the apparatus comprising:
    a learning data set management unit that manages a learning data set including a first medical image, labeling data, and a filtered medical image; and
    a lesion learning unit comprising a generator learning unit that manages training of a generator that generates a second medical image, and a discriminator learning unit that manages training of a discriminator that constructs the labeling data using the first medical image, the labeling data, and a medical image filtered from the second image generated through the generator,
    wherein the learning data set management unit
    comprises a second medical image filtering unit that filters the second medical image and selectively provides it as the filtered medical image.
  2. The GAN-based medical image learning apparatus of claim 1,
    wherein the learning data set management unit
    further comprises a histogram verification unit that checks histogram information on the first and second medical images.
  3. The GAN-based medical image learning apparatus of claim 2,
    wherein the second medical image filtering unit
    determines the filtered medical image from among the second medical images based on the histogram information on the first and second medical images.
  4. The GAN-based medical image learning apparatus of claim 2,
    wherein the second medical image filtering unit
    constructs reference histogram information using the histogram information on the first medical image, and
    determines the filtered medical image by comparing the histogram information on the second medical image with the reference histogram information.
  5. The GAN-based medical image learning apparatus of claim 2,
    wherein the learning data set management unit
    further comprises an error image verification unit that checks the first medical image for errors.
  6. The GAN-based medical image learning apparatus of claim 5,
    wherein the learning data set management unit
    further comprises an error image restoration unit that restores an erroneous image using the second medical image, based on the histogram information on the first and second medical images.
  7. The GAN-based medical image learning apparatus of claim 1,
    wherein the learning data set management unit
    constructs a supervised-learning-based learning model using the first medical image and labeling data corresponding to the first medical image,
    generates labeling data corresponding to the first medical image using the supervised-learning-based learning model, and
    sets and stores the generated labeling data having a probability score equal to or greater than a predetermined threshold as the first medical image.
  8. A GAN-based medical image learning method for training a medical image learning model, the method comprising:
    managing and storing a learning data set including a first medical image, labeling data, and a filtered medical image; and
    training a lesion learning model including a generator that generates a second medical image, and a discriminator that constructs the labeling data using the first medical image, the labeling data, and a medical image filtered from the second image generated through the generator,
    wherein the training of the lesion learning model
    includes detecting and providing the second medical image generated by the generator, and
    the managing and storing of the learning data set
    includes filtering the second medical image and selectively storing and managing it as the filtered medical image.
  9. The GAN-based medical image learning method of claim 8,
    wherein the managing and storing of the learning data set
    includes checking histogram information on the first and second medical images.
  10. The GAN-based medical image learning method of claim 9,
    wherein the filtering of the second medical image and the selective storing and managing of it as the filtered medical image
    includes determining the filtered medical image from among the second medical images based on the histogram information on the first and second medical images.
  11. The GAN-based medical image learning method of claim 9,
    wherein the filtering of the second medical image and the selective storing and managing of it as the filtered medical image includes:
    constructing reference histogram information using the histogram information on the first medical image; and
    determining the filtered medical image by comparing the histogram information on the second medical image with the reference histogram information.
  12. The GAN-based medical image learning method of claim 9,
    wherein the managing and storing of the learning data set
    further includes checking the first medical image for errors.
  13. The GAN-based medical image learning method of claim 12,
    wherein the managing and storing of the learning data set
    further includes restoring an erroneous image using the second medical image, based on the histogram information on the first and second medical images.
  14. The GAN-based medical image learning method of claim 8,
    wherein the managing and storing of the learning data set includes:
    constructing a supervised-learning-based learning model using the first medical image and labeling data corresponding to the first medical image;
    generating labeling data corresponding to the first medical image using the supervised-learning-based learning model; and
    setting and storing the generated labeling data having a probability score equal to or greater than a predetermined threshold as the first medical image.
PCT/KR2020/013739 (WO2021071286A1) — priority date 2019-10-08, filing date 2020-10-08 — Generative adversarial network-based medical image learning method and device

Applications Claiming Priority (2)

Application Number — Priority Date — Filing Date — Title
KR1020190124719A (KR102119056B1) — 2019-10-08 — 2019-10-08 — Method for learning medical image based on generative adversarial network and apparatus for the same
KR10-2019-0124719 — 2019-10-08

Publications (1)

Publication Number
WO2021071286A1

Family

ID=71088962

Family Applications (1)

Application Number — Title — Priority Date — Filing Date
PCT/KR2020/013739 (WO2021071286A1) — Generative adversarial network-based medical image learning method and device — 2019-10-08 — 2020-10-08

Country Status (2)

KR: KR102119056B1
WO: WO2021071286A1



Also Published As

Publication Number — Publication Date
KR102119056B1 — 2020-06-05


Legal Events

Code — Description
121 — EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20874633; Country of ref document: EP; Kind code of ref document: A1)
NENP — Non-entry into the national phase (Ref country code: DE)
32PN — EP: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/09/2022))
122 — EP: PCT application non-entry in European phase (Ref document number: 20874633; Country of ref document: EP; Kind code of ref document: A1)