WO2021071286A1 - Generative adversarial network-based medical image learning method and device - Google Patents

Generative adversarial network-based medical image learning method and device

Info

Publication number
WO2021071286A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
learning
image
data set
medical
Prior art date
Application number
PCT/KR2020/013739
Other languages
English (en)
Korean (ko)
Inventor
김원태
강신욱
이명재
김동민
남동연
Original Assignee
(주)제이엘케이
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)제이엘케이
Publication of WO2021071286A1

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Definitions

  • The present disclosure relates to a technique for training a deep learning model, and more specifically, to a method and an apparatus for learning lesions through unsupervised learning.
  • Deep learning learns from a very large amount of data and, when new data is input, selects the answer with the highest probability based on what has been learned.
  • Because deep learning can adapt to the input image and automatically discovers characteristic features while learning a model from data, attempts to use it in the field of artificial intelligence have been increasing in recent years.
  • The technical task of the present disclosure is to provide a method and apparatus for learning medical images based on a Generative Adversarial Network (GAN), capable of constructing a high-performance learning model using a small amount of labeling data.
  • GAN (Generative Adversarial Network)
  • a GAN-based medical image learning apparatus may be provided.
  • The apparatus may include: a learning data set management unit that manages a learning data set including a first medical image, labeling data, and a filtered medical image; and a lesion learning unit having a generator learning unit that manages training of a generator for generating a second medical image, and a discriminator learning unit that manages training of a discriminator which identifies the labeling data using the first medical image, the labeling data, and the medical image filtered from the second medical image generated by the generator.
  • The learning data set management unit may include a second medical image filtering unit that filters the second medical image and selectively provides it as the filtered medical image.
  • a GAN-based medical image learning method may be provided.
  • The method is a method of training a learning model for lesion detection, and may include: managing and storing a learning data set including a first medical image, labeling data, and a filtered medical image; training a generator that generates a second medical image; and training a discriminator that identifies the labeling data using the first medical image, the labeling data, and the medical image filtered from the second medical image generated by the generator.
  • Training the lesion learning model includes detecting and providing the second medical image generated by the generator, and managing and storing the training data set may include filtering the second medical image and selectively storing and managing it as the filtered medical image.
  • According to the present disclosure, a GAN-based medical image learning method and apparatus capable of constructing a high-performance learning model using a small amount of labeling data may be provided.
  • FIG. 1 is a block diagram showing the configuration of a GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIGS. 2A and 2B are diagrams illustrating an operation of learning a lesion learning model by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram illustrating an operation of restoring an error image by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIGS. 4A and 4B are diagrams illustrating a medical image in a normal state and a medical image in an abnormal state used in the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart illustrating a procedure of a GAN-based medical image learning method according to an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating a computing system that executes a GAN-based medical image learning method and apparatus according to an embodiment of the present disclosure.
  • When a component is said to be "connected", "coupled", or "linked" to another component, this includes not only a direct connection but also an indirect connection in which another component exists between them.
  • When a component "includes" or "has" another component, this means that still other components may be further included, rather than excluded, unless otherwise stated.
  • Terms such as first and second are used only to distinguish one component from another and do not limit the order or importance of the components unless otherwise noted. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • Components that are distinguished from each other are intended to clearly describe their respective features and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed into a plurality of hardware or software units. Accordingly, even if not stated otherwise, such integrated or distributed embodiments are included in the scope of the present disclosure.
  • Components described in the various embodiments are not necessarily essential components, and some may be optional components. Accordingly, an embodiment consisting of a subset of the components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments that include other components in addition to the components described in the various embodiments are also included in the scope of the present disclosure.
  • FIG. 1 is a block diagram showing the configuration of a GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • the GAN-based medical image learning apparatus may include a lesion learning unit 10 and a learning data set management unit 15.
  • The lesion learning unit 10 is a component that handles training of the lesion learning model 110, and performs this training using the learning data set provided by the learning data set management unit 15.
  • The lesion learning model 110 may be a learning model built through GAN-based semi-supervised learning, and may be composed of a combination of a generator and a discriminator.
  • The lesion learning unit 10 may include a generator learning unit 11 that manages training of the generator and a discriminator learning unit 12 that manages training of the discriminator.
  • the generator learning unit 11 may perform learning so that the generator can generate a medical image.
  • The discriminator learning unit 12 may provide the discriminator with a medical image generated by the generator (hereinafter referred to as a 'second medical image'), a medical image provided by the learning data set management unit 15 (hereinafter referred to as a 'first medical image'), and labeling data, and may train the discriminator to identify the labeling data among the received images.
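  • For context, the listing below is a minimal sketch of one conventional GAN training step in PyTorch. It is an illustrative outline only: the Generator and Discriminator modules, the noise dimension, and the loss choice are assumptions, and it does not reproduce the semi-supervised configuration of this disclosure, in which the discriminator also receives labeling data together with the first and second medical images.

```python
# Minimal sketch of a conventional GAN training step (assumed PyTorch modules).
# The discriminator here only separates real from generated images; the
# disclosure's discriminator additionally receives labeling data.
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, g_opt, d_opt, real_images, noise_dim=128):
    batch = real_images.size(0)
    noise = torch.randn(batch, noise_dim, device=real_images.device)

    # Discriminator update: real medical images -> 1, generated images -> 0.
    fake_images = generator(noise).detach()
    d_real = discriminator(real_images)          # outputs assumed in [0, 1]
    d_fake = discriminator(fake_images)
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: push the discriminator toward 1 on generated images.
    fake_images = generator(noise)
    d_out = discriminator(fake_images)
    g_loss = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```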
  • the lesion learning model 110 may include a lesion region detection learning model 111 and a lesion diagnosis learning model 113.
  • the lesion region detection learning model 111 may be a learning model that receives a medical image and detects and outputs a lesion region in the medical image.
  • The lesion diagnosis learning model 113 may be a learning model that receives the lesion region provided by the lesion region detection learning model 111 and determines and outputs the disease type and disease severity.
  • the generator learning unit 11 may perform learning so that the generator provided in the lesion region detection learning model 111 can generate a second medical image.
  • The discriminator learning unit 12 may train the discriminator to receive the first medical image, the second medical image, and the labeling data, and to identify the labeling data.
  • the labeling data may be an image in which a region in which a lesion occurs is detected in the first medical image.
  • The generator learning unit 11 may train the generator provided in the lesion diagnosis learning model 113 to generate an image of the area where a lesion occurs, and the discriminator learning unit 12 may train the discriminator to receive images of the lesion area (e.g., the lesion-area image produced by the generator and the lesion-area image provided from the training data set) together with labeling data, and to identify the labeling data.
  • the labeling data may be an image in which the type of disease and the severity of the disease are detected based on the image of the area where the lesion occurs.
  • The generator learning unit 11 may resize the lesion-area image provided by the generator to match the size and resolution of the first and second medical images, and then provide the resized image to the discriminator for training, as sketched below.
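  • A minimal sketch of this resizing step, assuming grayscale NumPy arrays and OpenCV; the interpolation choice is illustrative:

```python
import cv2
import numpy as np

def resize_to_reference(lesion_img: np.ndarray, reference_img: np.ndarray) -> np.ndarray:
    # Match the spatial size of the reference medical image before the image
    # is handed to the discriminator; cv2.resize expects (width, height).
    target_h, target_w = reference_img.shape[:2]
    return cv2.resize(lesion_img, (target_w, target_h), interpolation=cv2.INTER_LINEAR)
```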
  • While the lesion learning model 110 is being trained, the generator produces images that are used for training the discriminator.
  • The lesion learning apparatus according to an embodiment of the present disclosure is configured to use the images generated by the generator selectively for training of the discriminator.
  • Rather than delivering all of the second medical images generated by the generator directly to the discriminator, the generator learning unit 11 delivers them to the learning data set management unit 15, which may be configured to select images and provide them to the discriminator.
  • the generator learning unit 11 provides the second medical image generated in the learning process of the lesion learning model 110 to the learning data set management unit 15.
  • the learning data set management unit 15 may include a second medical image filtering unit 16 that selects a second medical image.
  • The training data set management unit 15 may include a histogram verification unit 16 that checks histograms; the histogram verification unit 16 may check the histogram of the second medical image provided by the generator learning unit 11 and provide the result to the second medical image filtering unit 16.
  • the histogram checking unit 16 may check the histogram of the first medical image 151 included in the training data set 150 and provide the result to the second medical image filtering unit 16.
  • the second medical image filtering unit 16 compares the histogram for the first medical image 151 and the histogram for the second medical image to filter only the image corresponding to the first medical image.
  • The second medical image filtering unit 16 may construct at least one piece of reference histogram information using the histogram of the first medical image 151, compare the at least one piece of reference histogram information with the histogram of the second medical image, and determine a second medical image that exceeds the at least one piece of reference histogram information to be the filtered image.
  • The second medical image filtering unit 16 may provide the image determined in this way, that is, the filtered image, and the learning data set management unit 15 may store it as the filtered image 153 in the learning data set 150.
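  • The listing below is a minimal sketch of this histogram-based selection, assuming grayscale NumPy arrays with 8-bit intensities. The concrete rule (correlation against reference histograms built from the first medical images, with a fixed threshold) is an illustrative reading; the disclosure only states that generated images are compared against reference histogram information and that images passing the comparison are stored as the filtered image 153.

```python
import numpy as np

def normalized_histogram(img: np.ndarray, bins: int = 64) -> np.ndarray:
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def build_reference_histograms(first_medical_images):
    # Reference histogram information derived from the real (first) medical images.
    return [normalized_histogram(img) for img in first_medical_images]

def filter_generated_images(generated_images, reference_hists, threshold=0.9):
    # Keep generated (second) medical images whose histogram correlates strongly
    # with at least one reference histogram; these become the filtered images.
    kept = []
    for img in generated_images:
        h = normalized_histogram(img)
        best = max(float(np.corrcoef(h, ref)[0, 1]) for ref in reference_hists)
        if best >= threshold:
            kept.append(img)
    return kept
```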
  • The learning data set 150 may include the first medical image 151, the labeling data 152, and the filtered image 153, and the discriminator learning unit 12 may perform discriminator training of the lesion learning model 110 using the first medical image 151, the labeling data 152, and the filtered image 153 included in the training data set 150.
  • The learning data set management unit 15 may further include an error image management unit 18 that checks and manages whether the first medical image contains an error.
  • the error image management unit 18 may include an error image check unit 18a for checking whether an error exists in the first medical image, and an error image restoration unit 18b for restoring the error image.
  • the error image verification unit 18a analyzes the first medical image and checks whether or not there is an error. For example, the error image verification unit 18a may check the histogram of the first medical image 151 and determine whether or not there is an error in the first medical image 151 by comparing it with a predetermined criterion.
  • The error image restoration unit 18b may use a second medical image as a replacement for the first medical image 151 in which an error has occurred. For example, the error image restoration unit 18b may detect a second medical image with high similarity by comparing the histograms of the second medical images with the histogram of the erroneous first medical image 151, and may then substitute the detected second medical image for the erroneous first medical image 151.
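  • A minimal sketch of this error check and replacement, reusing normalized_histogram() from the filtering sketch above; the thresholds are illustrative placeholders rather than values taken from the disclosure:

```python
import numpy as np

def has_error(img: np.ndarray, concentration_threshold: float = 0.5) -> bool:
    # An image whose histogram mass is concentrated on one intensity bin is
    # treated as having lost image information.
    hist = normalized_histogram(img)
    return float(hist.max()) > concentration_threshold

def most_similar_replacement(error_img, generated_images, similarity_threshold=0.9):
    # Pick the generated (second) medical image whose histogram best matches the
    # erroneous first medical image; return None if nothing is similar enough.
    target = normalized_histogram(error_img)
    best_img, best_sim = None, -1.0
    for img in generated_images:
        sim = float(np.corrcoef(target, normalized_histogram(img))[0, 1])
        if sim > best_sim:
            best_img, best_sim = img, sim
    return best_img if best_sim >= similarity_threshold else None
```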
  • The learning data set management unit 15 may check the amount of labeling data included in the learning data set 150 and, if it is less than a predetermined number or ratio, construct and store additional labeling data.
  • the learning data set management unit 15 may include a learning model (not shown) for learning labeling data 152 corresponding to the first medical image 151.
  • the learning model (not shown) may be configured based on supervised learning.
  • The learning data set management unit 15 can generate labeling data by inputting the first medical image 151 into the learning model (not shown); however, labeling data generated in this way may be relatively low in accuracy. In consideration of this, the learning data set management unit 15 may check the probability values of the labeling data output by the learning model (not shown), select labeling data with relatively high probability values, and store them as the labeling data 152 of the learning data set 150.
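  • A minimal sketch of this confidence-based selection, assuming a hypothetical trained supervised model with a predict_proba() method returning a probability vector per image; the threshold is illustrative:

```python
import numpy as np

def select_confident_labels(model, unlabeled_images, prob_threshold=0.95):
    # Generate candidate labeling data with the supervised model and keep only
    # predictions whose top probability exceeds the threshold; the kept pairs
    # are stored as labeling data 152 of the learning data set 150.
    selected = []
    for img in unlabeled_images:
        probs = model.predict_proba(img)      # hypothetical API
        if float(np.max(probs)) >= prob_threshold:
            selected.append((img, int(np.argmax(probs))))
    return selected
```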
  • FIGS. 2A and 2B are diagrams illustrating an operation of learning a lesion learning model by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • FIG. 2A shows an operation of learning a lesion region detection learning model 111 by a GAN-based medical image learning apparatus.
  • The discriminator 21 learns the characteristics of the lesion area by learning the lesion area in advance.
  • the generator 22 may generate the second medical image 202.
  • The second medical image 202 generated in this way may be provided to the learning data set management unit 23, which checks the histogram of the second medical image 202 and, based on it, may filter the second medical image 202.
  • the learning data set management unit 23 may store and manage the filtered second medical image 203 in the learning data set.
  • The learning data set management unit 23 may provide the first medical image 201, the filtered second medical image 203, and the first labeling data 205 to the discriminator 21, and the discriminator 21 may be trained to classify the first labeling data 205 among the provided data 201, 203, and 205.
  • the first labeling data 205 may be data obtained by extracting a lesion area from a medical image.
  • In this way, the lesion region detection learning model 111 can be constructed as a GAN-based unsupervised learning model through the discriminator 21 and the generator 22, and when a medical image is input it can detect and output the corresponding lesion region.
  • The lesion learning unit 10 may be configured to provide the first labeling data 205, that is, the lesion-area data extracted from the medical image, to the lesion diagnosis learning model 113 while training the lesion region detection learning model 111, thereby linking the training of the lesion region detection learning model 111 and the lesion diagnosis learning model 113.
  • The discriminator 21 learns the characteristics of the disease type and disease severity by learning about the type of disease and the severity of the disease in advance.
  • the generator 22 may generate the fourth medical image 212.
  • the fourth medical image 212 generated by the generator 22 may be an image from which a lesion area is extracted.
  • The fourth medical image 212 generated through the above-described operation may be provided to the learning data set management unit 23, which checks the histogram of the fourth medical image 212 and, based on it, may filter the fourth medical image 212.
  • the learning data set management unit 23 may store and manage the filtered fourth medical image 213 in the learning data set.
  • The learning data set management unit 23 may provide the third medical image 211, the filtered fourth medical image 213, and the second labeling data 215 to the discriminator 21, and the discriminator 21 may be trained to classify the second labeling data 215 among the provided data 211, 213, and 215.
  • the second labeling data 215 may be data obtained by extracting a disease type and a disease severity from an image of a lesion area.
  • the lesion region provided by the lesion region detection learning model 111 may be used as an input to the lesion diagnosis learning model 113.
  • In this way, the lesion diagnosis learning model 113 can be constructed as a GAN-based unsupervised learning model through the discriminator 21 and the generator 22, and when an image of a lesion area is input it can detect and output the corresponding disease type, disease severity, and so on.
  • FIG. 3 is a diagram illustrating an operation of restoring an error image by the GAN-based medical image learning apparatus according to an embodiment of the present disclosure.
  • the learning data set management unit 15 requests the error image verification unit 18a to check the error of the first medical image 151 in order to check whether the first medical image 151 has an error.
  • The error image verification unit 18a may check the histogram information of the first medical image 151 (301) and determine whether the first medical image 151 contains an error based on the checked histogram information (302).
  • The histogram of a medical image 410 in a normal state may be evenly distributed, but when an error occurs during the generation, storage, or transmission of a medical image, image information for part of the area may be lost. When the medical image 420 is then constructed from the partially lost data, the lost information cannot be recovered, so the corresponding area 425 may appear as an abnormal area. When a medical image contains such an abnormal region 425, the histogram of the medical image 420 is not evenly distributed and may be concentrated on a specific color.
  • Accordingly, the error image verification unit 18a checks the histogram information of the first medical image 151 and, when that histogram information shows a distribution concentrated on a specific color, that is, when the histogram value of a specific color exceeds a predetermined threshold, it may determine that an error exists in the medical image.
  • The error image restoration unit 18b may be configured to replace a first medical image in which an error exists with a second medical image generated during the training of the lesion learning unit 10. Specifically, the similarity between the histogram of the first medical image and the histogram of a second medical image is calculated (303), and when the calculated similarity exceeds a predetermined threshold, that second medical image may be determined to be the restoration image that replaces the first medical image (304).
  • More specifically, the error image restoration unit 18b may calculate histograms in the horizontal and vertical directions in units of predetermined line areas of the first medical image and detect line areas in which the histogram value of a specific color exceeds a predetermined threshold. Based on this, the error image restoration unit 18b may locate error areas in the horizontal and vertical directions and distinguish the error area from the non-error area. Thereafter, the error image restoration unit 18b checks the histograms of the first medical image and the second medical images over the non-error region, calculates their similarity, extracts a second medical image whose calculated similarity exceeds a predetermined threshold, and substitutes it for the first medical image, as sketched below.
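  • A minimal sketch of this line-by-line localization and non-error-region comparison, assuming grayscale NumPy arrays; thresholds and bin counts are illustrative placeholders, and the vertical direction can be handled the same way on the transposed image:

```python
import numpy as np

def find_error_rows(img: np.ndarray, concentration_threshold: float = 0.8):
    # Rows whose histogram is dominated by a single intensity are treated as
    # belonging to the error area (the per-line check in the horizontal direction).
    error_rows = []
    for y in range(img.shape[0]):
        hist, _ = np.histogram(img[y], bins=64, range=(0, 255))
        if hist.max() / max(hist.sum(), 1) > concentration_threshold:
            error_rows.append(y)
    return error_rows

def similarity_excluding_rows(img_a: np.ndarray, img_b: np.ndarray, error_rows):
    # Histogram correlation computed only over the non-error rows of both images.
    keep = np.setdiff1d(np.arange(img_a.shape[0]), np.asarray(error_rows, dtype=int))
    ha, _ = np.histogram(img_a[keep], bins=64, range=(0, 255))
    hb, _ = np.histogram(img_b[keep], bins=64, range=(0, 255))
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return float(np.corrcoef(ha, hb)[0, 1])
```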
  • Although the operation of identifying an error image and determining a replacement image has been exemplified using the first and second medical images, the present disclosure is not limited thereto and may be variously changed or applied.
  • the operation of checking the error image and determining the reconstructed image may be applied to the third and fourth medical images.
  • FIG. 5 is a flowchart illustrating a procedure of a GAN-based medical image learning method according to an embodiment of the present disclosure.
  • the GAN-based medical image learning method may be performed by the GAN-based medical image learning apparatus described above.
  • the apparatus for learning a medical image may configure and store a learning data set.
  • the training data set is a data set for learning a lesion learning model, and may include a medical image and labeling data.
  • the lesion learning model may include a lesion region detection learning model and a lesion diagnosis learning model.
  • the lesion region detection learning model is a learning model that receives a medical image and detects and outputs a lesion area in the medical image.
  • the lesion diagnosis learning model may be a learning model that receives a region in which a lesion is generated provided from a lesion region detection learning model, determines a type of disease, a severity of a disease, and the like, and outputs it. Based on this, the medical image and the labeling data may be configured such that data provided to the lesion region detection learning model and the lesion diagnosis learning model are classified.
  • The first medical image provided to the lesion region detection learning model may be a medical image obtained by photographing a patient's body.
  • the first labeling data may be an image that detects an area where a lesion occurs in the first medical image.
  • The medical image provided to the lesion diagnosis learning model may be an image (hereinafter referred to as a 'third medical image') in which the image of the lesion area is resized according to the size and resolution of the first medical image.
  • The second labeling data may be an image in which the disease type and disease severity are detected based on the image of the area where the lesion occurs.
  • The apparatus for learning a medical image may check and manage whether the first medical image (or the third medical image) contains an error. Specifically, the apparatus analyzes the first medical image (or the third medical image) to check whether there is an error. For example, the medical image learning apparatus may check the histogram of the first medical image (or the third medical image) and determine whether an error exists by comparing it with a predetermined criterion.
  • The apparatus for learning a medical image may use a second medical image (or a fourth medical image) as a replacement for the first medical image (or the third medical image) in which an error has occurred.
  • For example, the medical image learning apparatus may compare the histogram of the second medical image (or the fourth medical image) with the histogram of the first medical image (or the third medical image) in which the error has occurred, and detect a second medical image (or a fourth medical image) with high similarity.
  • The apparatus may then substitute the detected second medical image (or fourth medical image) for the first medical image (or the third medical image) in which the error occurred.
  • The medical image learning apparatus may check the amount of labeling data included in the training data set and, if it is less than a predetermined number or ratio, construct and store additional labeling data.
  • the apparatus for learning medical images may include a learning model (not shown) for learning labeling data corresponding to a first medical image.
  • the learning model (not shown) may be configured based on supervised learning.
  • the apparatus for learning medical images may generate labeling data by inputting the first medical image to the learning model (not shown), and the labeling data generated in this manner may be relatively inferior in accuracy.
  • the apparatus for learning a medical image may check a probability value of labeling data output by a learning model (not shown), select labeling data having a relatively high probability value, and store it as labeling data of a training data set.
  • In step S520, the apparatus for learning a medical image performs training of the lesion learning model using the learning data set.
  • The lesion learning model may be a learning model built through GAN-based semi-supervised learning, and may be composed of a combination of a generator and a discriminator.
  • The apparatus for learning a medical image may perform training of the generator and the discriminator, respectively.
  • the apparatus for learning a medical image may perform learning so that a generator can generate a medical image.
  • The apparatus for learning a medical image may also perform training of the discriminator.
  • The apparatus may train the discriminator to receive a medical image generated by the generator, a medical image of the learning data set, and labeling data, and to identify the labeling data among the received images.
  • For example, the medical image learning apparatus may train the generator provided in the lesion region detection learning model to generate a second medical image and, in step S522, may train the discriminator to receive the first medical image, the second medical image, and the first labeling data and to identify the first labeling data.
  • the first labeling data may be an image obtained by detecting a region in which a lesion occurs in the first medical image.
  • The medical image learning apparatus may train the generator provided in the lesion diagnosis learning model to generate an image of the area where the lesion occurs (hereinafter referred to as a 'fourth medical image').
  • The discriminator receives the fourth medical image, the lesion-area image provided from the training data set (i.e., the third medical image), and the second labeling data, and is trained to identify the second labeling data.
  • the second labeling data may be an image in which the type of disease and the severity of the disease are detected based on the image of the area where the lesion occurs.
  • The medical image learning apparatus may resize the lesion-area image provided by the generator to match the size and resolution of the first and second medical images, and then provide the resized image to the discriminator for training.
  • While the lesion learning model is being trained, the generator produces images that are used for training the discriminator.
  • The apparatus for learning a medical image is configured to use the images generated by the generator selectively for training of the discriminator.
  • Rather than delivering all of the second medical images (or fourth medical images) generated by the generator to the discriminator, the medical image learning apparatus may be configured to selectively provide the second medical images (or fourth medical images) to the discriminator.
  • the apparatus for learning a medical image may filter the medical image generated in step S520. Specifically, an operation of checking the second medical image generated in the learning process of the lesion region detection learning model and selecting the second medical image may be performed.
  • the apparatus for learning a medical image may check the histogram of the first medical image and the histogram of the second medical image, compare them, and filter only the image corresponding to the first medical image.
  • The medical image learning apparatus may construct at least one piece of reference histogram information using the histogram of the first medical image, compare the at least one piece of reference histogram information with the histogram of the second medical image, and determine a second medical image exceeding the reference histogram information to be the filtered image.
  • the apparatus for learning a medical image may check the fourth medical image generated during the learning process of the lesion diagnosis learning model, and may perform an operation of selecting the fourth medical image.
  • the apparatus for learning a medical image may check the histogram of the third medical image and the histogram of the fourth medical image, compare them, and filter only the image corresponding to the third medical image.
  • The medical image learning apparatus may construct at least one piece of reference histogram information using the histogram of the third medical image, compare the at least one piece of reference histogram information with the histogram of the fourth medical image, and determine a fourth medical image exceeding the reference histogram information to be the filtered image.
  • the apparatus for learning a medical image may configure and provide the determined image, that is, the filtered image as learning data (S540).
  • the apparatus for learning a medical image may proceed to step S510 to store the filtered image in the learning data set.
  • the learning data set may include the first medical image, the third medical image, the first labeling data, the second labeling data, the filtered second medical image, and the filtered fourth medical image.
  • The apparatus for learning a medical image may perform discriminator training of the lesion learning model using the data included in the training data set.
  • FIG. 6 is a block diagram illustrating a computing system that executes a GAN-based medical image learning method and apparatus according to an embodiment of the present disclosure.
  • The computing system 1000 may include at least one processor 1100, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage 1600, and a network interface 1700, connected through a bus 1200.
  • the processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600.
  • the memory 1300 and the storage 1600 may include various types of volatile or nonvolatile storage media.
  • the memory 1300 may include read only memory (ROM) and random access memory (RAM).
  • The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by the processor 1100, or in a combination of the two.
  • A software module may reside in a storage medium (i.e., the memory 1300 and/or the storage 1600) such as RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, or a CD-ROM.
  • An exemplary storage medium is coupled to the processor 1100, which can read information from and write information to the storage medium.
  • the storage medium may be integral with the processor 1100.
  • the processor and storage media may reside within an application specific integrated circuit (ASIC).
  • the ASIC may reside within the user terminal.
  • the processor and storage medium may reside as separate components within the user terminal.
  • Although the exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, this is not intended to limit the order in which the steps are performed, and each step may be performed simultaneously or in a different order if necessary.
  • In addition to the exemplary steps, additional steps may be included, some steps may be omitted while the remaining steps are included, or some steps may be omitted and additional other steps may be included.
  • various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof.
  • In the case of a hardware implementation, it may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), general-purpose processors, controllers, microcontrollers, microprocessors, or the like.
  • The scope of the present disclosure includes software or machine-executable instructions (e.g., operating systems, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium that stores such software or instructions and is executable on a device or computer.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention relates to a GAN-based medical image learning device. The GAN-based medical image learning device comprises: a learning data set management unit for managing a learning data set comprising a first medical image, labeling data, and a filtered medical image; and a lesion learning unit having a generator learning part for managing the training of a generator that generates a second medical image, and a discriminator learning part for managing the training of a discriminator that identifies the labeling data by means of the first medical image, the labeling data, and the medical image filtered from the second image generated by the generator, wherein the learning data set management unit may have a second medical image filtering part for filtering the second medical image and selectively providing it as the filtered medical image.
PCT/KR2020/013739 2019-10-08 2020-10-08 Generative adversarial network-based medical image learning method and device WO2021071286A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0124719 2019-10-08
KR1020190124719A KR102119056B1 (ko) 2019-10-08 2019-10-08 생성적 적대 신경망 기반의 의료영상 학습 방법 및 장치

Publications (1)

Publication Number Publication Date
WO2021071286A1 (fr) 2021-04-15

Family

ID=71088962

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/013739 WO2021071286A1 (fr) 2019-10-08 2020-10-08 Generative adversarial network-based medical image learning method and device

Country Status (2)

Country Link
KR (1) KR102119056B1 (fr)
WO (1) WO2021071286A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102119056B1 (ko) * 2019-10-08 2020-06-05 (주)제이엘케이 생성적 적대 신경망 기반의 의료영상 학습 방법 및 장치
US11076824B1 (en) * 2020-08-07 2021-08-03 Shenzhen Keya Medical Technology Corporation Method and system for diagnosis of COVID-19 using artificial intelligence
CN112381725B (zh) * 2020-10-16 2024-02-02 广东工业大学 基于深度卷积对抗生成网络的图像修复方法及装置
KR102477632B1 (ko) * 2021-11-12 2022-12-13 프로메디우스 주식회사 적대적 생성 신경망을 이용한 영상 학습 장치 및 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150002284A (ko) * 2013-06-28 2015-01-07 삼성전자주식회사 병변 검출 장치 및 방법
KR20180040287A (ko) * 2016-10-12 2018-04-20 (주)헬스허브 기계학습을 통한 의료영상 판독 및 진단 통합 시스템
KR20190088376A (ko) * 2017-12-28 2019-07-26 (주)휴톰 학습용 데이터 관리방법, 장치 및 프로그램
KR20190103926A (ko) * 2018-02-28 2019-09-05 서울대학교산학협력단 딥러닝을 이용한 의료영상의 공간 정규화 장치 및 그 방법
KR102119056B1 (ko) * 2019-10-08 2020-06-05 (주)제이엘케이 생성적 적대 신경망 기반의 의료영상 학습 방법 및 장치

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WOO SANG-KEUN: "Generation of contrast enhanced computed tomography image using deep learning network", Journal of the Korea Society of Computer and Information, vol. 24, no. 3, 1 March 2019 (2019-03-01), pages 41-47, XP055800449, ISSN: 1598-849X, DOI: 10.9708/jksci.2019.24.03.041 *

Also Published As

Publication number Publication date
KR102119056B1 (ko) 2020-06-05

Similar Documents

Publication Publication Date Title
WO2021071286A1 (fr) Procédé et dispositif d'apprentissage d'images médicales basés sur un réseau contradictoire génératif
US10721256B2 (en) Anomaly detection based on events composed through unsupervised clustering of log messages
WO2020111754A9 (fr) Procédé pour fournir un système de diagnostic utilisant l'apprentissage semi-supervisé, et système de diagnostic l'utilisant
WO2021071288A1 (fr) Procédé et dispositif de formation de modèle de diagnostic de fracture
JP6575132B2 (ja) 情報処理装置及び情報処理プログラム
WO2022055100A1 (fr) Procédé de détection d'anomalies et dispositif associé
US10878336B2 (en) Technologies for detection of minority events
US20150262068A1 (en) Event detection apparatus and event detection method
CN106789386A (zh) 检测通信总线上错误的方法以及用于网络系统的检错器
WO2021095991A1 (fr) Dispositif et procédé de génération d'une image de défaut
CN115065798B (zh) 一种基于大数据的视频分析监控系统
US11580664B2 (en) Deep learning-based method and device for calculating overhang of battery
WO2019088335A1 (fr) Serveur et système de collaboration intelligent, et procédé d'analyse associé basé sur la collaboration
TW202016784A (zh) 惡意軟體辨識裝置及方法
CN113537145B (zh) 目标检测中误、漏检快速解决的方法、装置及存储介质
WO2021002669A1 (fr) Appareil et procédé pour construire un modèle d'apprentissage de lésion intégré, et appareil et procédé pour diagnostiquer une lésion à l'aide d'un modèle d'apprentissage de lésion intégré
US11281912B2 (en) Attribute classifiers for image classification
US11108645B2 (en) Device interface matching using an artificial neural network
CN111800294B (zh) 网关故障诊断方法、装置、网络设备及存储介质
WO2021071258A1 (fr) Dispositif et procédé d'apprentissage d'image de sécurité mobile basés sur l'intelligence artificielle
WO2023101480A1 (fr) Systèmes et procédés de gestion intelligente d'une batterie
US20220148193A1 (en) Adaptive object recognition apparatus and method in fixed closed circuit television edge terminal using network
WO2021015489A2 (fr) Procédé et dispositif d'analyse d'une zone d'image singulière à l'aide d'un codeur
WO2019208869A1 (fr) Appareil et procédé de détection des caractéristiques faciales à l'aide d'un apprentissage
CN115730205A (zh) 一种配置决策装置的方法、装置及相关设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20874633

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/09/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20874633

Country of ref document: EP

Kind code of ref document: A1