WO2021118068A1 - Medical image generation method and device using same


Info

Publication number
WO2021118068A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature vector
image
bone
generating
candidate
Prior art date
Application number
PCT/KR2020/015487
Other languages
French (fr)
Korean (ko)
Inventor
배병욱
정규환
Original Assignee
VUNO Inc. (주식회사 뷰노)
Priority date
Filing date
Publication date
Priority claimed from KR1020190162678A external-priority patent/KR102177567B1/en
Priority claimed from KR1020200146676A external-priority patent/KR102556646B1/en
Application filed by VUNO Inc. (주식회사 뷰노)
Publication of WO2021118068A1 publication Critical patent/WO2021118068A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H30/40 ICT specially adapted for processing medical images, e.g. editing
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/70 ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G16H70/00 ICT specially adapted for the handling or processing of medical references

Definitions

  • the present invention relates to a method for generating a medical image including a bone image and an apparatus using the same.
  • An artificial neural network that extracts a feature vector of an input medical image and calculates a result (for example, an age) corresponding to the feature vector may be trained based on prepared training data. In this case, performance is likely to degrade for ages for which some training data is missing.
  • In particular, when time-series continuous medical images are used as training data, the reading accuracy of the artificial neural network may decrease for intervals corresponding to intermediate stages that are absent from the representative medical images, because the training data for them is missing.
  • An object of the present invention is to provide a means for assisting in reading a bone image by generating a bone image corresponding to an age that is not stored in a database using a previously stored bone image.
  • Another object of the present invention is to provide a reading model learning method capable of improving the reading accuracy of a reading model that extracts a feature vector from a bone image.
  • the synthesis scheme of the bone image generation method according to the present invention is not limited to bone images; it can be extended to any medical image to which a predetermined grade (for example, an arbitrary quantified factor such as age or degree of disease progression) is assigned.
  • Through this, the present invention can be applied to a method of generating a medical image corresponding to a non-existent grade from existing medical images.
  • the characteristic configuration of the present invention for achieving the objects of the present invention described above and for realizing the characteristic effects of the present invention described later is as follows.
  • a method for generating a bone image performed by a computing device includes the steps of: (a) receiving a target bone age; (b) acquiring a first candidate bone image and a second candidate bone image for generating a bone image corresponding to the target bone age; (c) extracting a first feature vector and a second feature vector for each of the first candidate bone image and the second candidate bone image based on a pre-trained reading model; (d) generating a third feature vector corresponding to the target bone age by synthesizing the first feature vector and the second feature vector; and (e) generating a target bone image corresponding to the target bone age based on the third feature vector.
  • the pre-trained reading model is trained in advance, based on processed training bone images, to output a feature vector of an input bone image; the processed training bone images may be generated by synthesizing a first training bone image and a second training bone image obtained from the training bone images to generate a third training bone image corresponding to a third bone age between a first bone age corresponding to the first training bone image and a second bone age corresponding to the second training bone image.
  • a computing device for generating a bone image includes: a communication unit for receiving an input of a target bone age; and a processor for generating a target bone image corresponding to the target bone age, wherein the processor may obtain a first feature vector and a second feature vector for each of a first candidate bone image and a second candidate bone image based on a pre-trained reading model, synthesize the first feature vector and the second feature vector to generate a third feature vector corresponding to the target bone age, and generate a target bone image corresponding to the target bone age based on the third feature vector.
  • since conventional medical images used in hospitals can be used as they are, the method of the present invention is not dependent on a specific type of image or platform.
  • a medical image corresponding to a predetermined grade (a medical image of a non-existent grade) may be generated by synthesizing feature vectors extracted from existing medical images.
  • FIG. 1 is a conceptual diagram schematically illustrating an exemplary configuration of a computing device for performing a method for generating a medical image including a bone image according to the present invention.
  • FIG. 2 is an exemplary block diagram illustrating hardware or software components of a computing device performing a method for generating a bone image according to the present invention.
  • FIG. 3 is a flowchart illustrating a method for generating a bone image according to an embodiment.
  • FIG. 4 is a diagram illustrating an example in which a method for generating a bone image according to an embodiment is performed.
  • FIG. 5 is a diagram illustrating a distribution of feature vectors according to age.
  • FIG. 6 is a diagram illustrating an example of an interface to which a method for generating a bone image according to an embodiment is applied.
  • FIG. 7 is an exemplary block diagram illustrating hardware or software components of a computing device performing a method for generating a medical image according to the present invention.
  • FIG. 8 is a flowchart illustrating a method of generating a medical image according to an exemplary embodiment.
  • image or "image data” as used throughout the present description and claims refers to multidimensional data composed of discrete image elements (e.g., pixels in a two-dimensional image, voxels in a three-dimensional image). refers to For example, “imaging” means X-ray imaging, (cone-beam) computed tomography, magnetic resonance imaging (MRI), ultrasound or any other medical imaging known in the art. It may be a medical image of a subject, ie, a subject, collected by the system. Also, the image may be provided in a non-medical context, for example, a remote sensing system, an electron microscopy, and the like.
  • imaging means X-ray imaging, (cone-beam) computed tomography, magnetic resonance imaging (MRI), ultrasound or any other medical imaging known in the art. It may be a medical image of a subject, ie, a subject, collected by the system. Also, the image may be provided in a non-medical context, for example, a remote sensing system, an electron
  • an 'image' may refer to a viewable image (e.g., displayed on a video screen) or to a digital representation of an image (e.g., a file corresponding to the pixel output of a CT or MRI detector, etc.).
  • The DICOM (Digital Imaging and Communications in Medicine) standard is a collective term for the various standards used for digital imaging and communication of medical devices, published by the joint committee formed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA).
  • A 'picture archiving and communication system' (PACS) is a term referring to a system that stores, processes, and transmits medical images in accordance with the DICOM standard; medical images acquired using digital medical imaging equipment such as X-ray, CT, and MRI are stored in DICOM format and can be transmitted to terminals inside and outside the hospital through a network, to which reading results and medical records can be added.
  • 'training' or 'learning' is a term referring to performing machine learning through computing according to a procedure; it will be understood by those of ordinary skill in the art that it is not intended to refer to mental actions such as human educational activity.
  • FIG. 1 is a conceptual diagram schematically illustrating an exemplary configuration of a computing device for performing a method for generating a medical image including a bone image according to the present invention.
  • a computing device 100 includes a communication unit 110 and a processor 120 , and may communicate directly or indirectly with an external computing device (not shown) through the communication unit 110 .
  • the computing device 100 may achieve the desired system performance by combining typical computer hardware (e.g., a computer processor, memory, storage, input and output devices, and other components of conventional computing devices; electronic communication devices such as routers and switches; and electronic information storage systems such as network-attached storage (NAS) and storage area networks (SAN)) with computer software (i.e., instructions that enable the computing device to function in a particular way).
  • the communication unit 110 of such a computing device may transmit and receive a request and a response to and from another computing device that is interlocked.
  • for example, a request and a response may be carried over the same transmission control protocol (TCP) session, but they are not limited thereto and may also be transmitted and received as, for example, user datagram protocol (UDP) datagrams.
  • the communication unit 110 may also include, or interwork with, a keyboard, a mouse, and other external input devices for receiving commands or instructions, as well as printers, displays, and other external output devices.
  • the processor 120 of the computing device may include a hardware configuration such as a micro processing unit (MPU), a central processing unit (CPU), a graphics processing unit (GPU) or a tensor processing unit (TPU), a cache memory, and a data bus. In addition, it may further include a software configuration such as an operating system and an application for performing a specific purpose.
  • In the following description, the image of the present invention is a bone image, taking a bone X-ray image as an example. However, the scope of the present invention is not limited thereto; a person skilled in the art will readily understand that the invention can be applied to all general types of bone images and can be used in the process of reading the bone age of a target image.
  • the present invention can also be extended to graded medical images other than bone images. One example of a graded medical image is a fundus image in which the degree of progression of diabetic retinopathy of the eye included in the image is graded and labeled; however, a person skilled in the art will understand that the scope of the present invention is not limited thereto and is applicable to a medical image graded according to any condition.
  • FIG. 2 is an exemplary block diagram illustrating hardware or software components of a computing device performing a method for generating a bone image according to the present invention.
  • the individual modules shown in FIG. 2 may be implemented by, for example, the communication unit 110 or the processor 120 included in the computing device 100 , or by the communication unit 110 and the processor 120 interworking, as those skilled in the art will understand.
  • the computing device 100 may include the target bone age information receiving module 210 as a component thereof.
  • the target bone age information receiving module 210 may receive target bone age information from an input device included in the computing device or from an interworking user terminal.
  • the feature vector acquisition module 220 may acquire a feature vector for generating a bone image corresponding to the target bone age information.
  • the feature vector acquisition module 220 may include a pre-trained first reading model to extract a feature vector from an input bone image and read a bone age based on the extracted feature vector.
  • the first reading model may use, for example, a Deep Neural Network (DNN), a Convolutional Neural Network (CNN), a Deep Convolutional Neural Network (DCNN), a Recurrent Neural Network (RNN), a Restricted Boltzmann Machine (RBM), a Deep Belief Network (DBN), a Single Shot Detector (SSD), You Only Look Once (YOLO), or the like, as will be understood by those skilled in the art.
  • the type of artificial neural network usable in the present invention is not limited to the presented examples; it will be understood by those skilled in the art that it may include any artificial neural network that can be trained, based on labeled training data, to extract a feature vector from a bone image and to read the bone age based on the extracted feature vector.
  • the first reading model included in the feature vector acquisition module 220 may be trained in advance, based on training bone images labeled with the corresponding ages, to extract a feature vector for an input bone image and to read the bone age corresponding to the input bone image based on the extracted feature vector.
  • additional training may be performed on the first reading model based on synthesized training bone images generated by synthesizing two or more existing training bone images. For example, if the existing training bone images include bone images corresponding to ages 1 and 3, training of the first reading model may be performed based not only on the existing bone images corresponding to ages 1 and 3 but also on a bone image corresponding to age 2 generated based on the synthesis of the bone images corresponding to ages 1 and 3.
  • a person of ordinary skill in the art will note that a method of generating a synthetic training bone image based on existing training bone images may be performed based on the operations of the synthesis module 230 and the bone image generation and transmission module 240 described below.
  • the method of generating synthetic training bone images is not limited to the example shown; synthetic training bone images of various ages may be generated based on the existing training bone images. For example, synthetic training bone images corresponding to ages 1.6, 2.2, 2.5, and the like may be generated from training bone images corresponding to ages 1 and 3, which may be performed based on the operation of the synthesis module 230 and the bone image generation and transmission module 240 described below.
  • the feature vector acquisition module 220 may acquire a feature vector for an input bone image through an operation of acquiring the output of an intermediate layer of the first reading model as the feature vector, as sketched below.
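  • As a minimal illustrative sketch only (the patent does not specify the network), an intermediate-layer output can be captured as a feature vector with a forward hook; the torchvision ResNet, the hooked layer, and the input shape below are assumptions for demonstration.

```python
# Sketch: capturing an intermediate-layer output of a reading model as a
# feature vector via a PyTorch forward hook. Model, layer, and input shape
# are illustrative stand-ins, not the patent's actual first reading model.
import torch
import torchvision.models as models

reading_model = models.resnet18(num_classes=1)  # stand-in age-regression model
features = {}

def hook(module, inputs, output):
    # Flatten the pooled activation into a flat feature vector per sample.
    features["vec"] = output.flatten(1)

reading_model.avgpool.register_forward_hook(hook)

bone_image = torch.randn(1, 3, 224, 224)        # placeholder bone X-ray tensor
with torch.no_grad():
    predicted_age = reading_model(bone_image)   # reading-model output (age)
feature_vector = features["vec"]                # e.g., shape (1, 512)
```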
  • the feature vector acquisition module 220 may acquire at least two candidate bone images corresponding to the target bone age and extract feature vectors corresponding to the bone images through the pre-trained first reading model. For example, when the target bone age is 4 years, the feature vector acquisition module may acquire a first candidate bone image corresponding to age 3 and a second candidate bone image corresponding to age 5, and extract a first feature vector and a second feature vector corresponding to the respective images.
  • the feature vector acquisition module 220 may acquire a first feature vector and a second feature vector corresponding to the target bone age from among feature vectors stored in a database.
  • For example, feature vectors corresponding to each age may be stored in a database; when the target age is 4 years, the feature vector acquisition module 220 may obtain a feature vector corresponding to age 3 and a feature vector corresponding to age 5 from the database, for instance as in the sketch below.
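  • A minimal sketch of such a lookup, assuming an in-memory mapping from age labels to stored feature vectors (the ages and vector size are illustrative):

```python
# Sketch: look up the two stored feature vectors whose ages bracket the
# target age. The dict stands in for the feature-vector database.
import numpy as np

feature_db = {1.0: np.random.rand(512), 3.0: np.random.rand(512),
              5.0: np.random.rand(512)}  # age -> stored feature vector

def bracketing_vectors(target_age: float):
    lower = max((a for a in feature_db if a <= target_age), default=None)
    upper = min((a for a in feature_db if a >= target_age), default=None)
    if lower is None or upper is None:
        raise ValueError("target age outside the range of stored ages")
    return (lower, feature_db[lower]), (upper, feature_db[upper])

(lo_age, f1), (hi_age, f2) = bracketing_vectors(4.0)  # -> ages 3.0 and 5.0
```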
  • the synthesis module 230 may generate a third feature vector corresponding to the target age by synthesizing the first feature vector and the second feature vector obtained through the feature vector acquisition module 220 . Specifically, the synthesis module 230 may first determine whether to synthesize the first feature vector and the second feature vector based on a predetermined condition. For example, the synthesis module 230 may determine whether to synthesize them based on Equation 1:
  • Equation 1: |L_1 - L_2| ≤ m
  • where L_1 may represent a first label (age) corresponding to the first feature vector, L_2 may represent a second label (age) corresponding to the second feature vector, and m may represent a preset threshold value.
  • m may be a threshold value set in advance for the accuracy of the synthesis result. Specifically, to generate a feature vector corresponding to age 10, more accurate results can be obtained by using a feature vector corresponding to age 9 and a feature vector corresponding to age 11 than by using a feature vector corresponding to age 1 and a feature vector corresponding to age 19. Accordingly, the synthesis module 230 may determine whether to synthesize the feature vectors according to the predetermined threshold value.
  • the synthesis module 230 may determine whether to proceed with the synthesis according to a predetermined probability.
  • when the condition is satisfied, the synthesis module 230 may synthesize the feature vectors and the labels corresponding to the feature vectors based on Equations 2 and 3:
  • Equation 2: F_3 = λ·F_1 + (1 - λ)·F_2
  • Equation 3: L_3 = λ·L_1 + (1 - λ)·L_2
  • F_1, F_2, and F_3 are the first feature vector, the second feature vector, and the third feature vector corresponding respectively to the first candidate bone image, the second candidate bone image, and the bone image corresponding to the target age
  • L_1, L_2, and L_3 denote a first label (age) corresponding to the first feature vector, a second label (age) corresponding to the second feature vector, and a third label (age) corresponding to the third feature vector, respectively
  • λ may be any variable indicating the degree of synthesis; for example, λ may be a variable following a beta distribution
  • through Equations 2 and 3, a third feature vector and a third label (age) corresponding thereto may be determined.
  • the method by which the synthesis module 230 performs the synthesis may include any data synthesis method, including methods using linear interpolation or a fully connected layer, for instance as in the sketch below.
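  • Putting Equations 1 to 3 together, a minimal sketch of the linear-interpolation variant might look as follows; the threshold m, the beta parameters, and the vector size are illustrative assumptions, and λ is solved for the target age when one is given:

```python
# Sketch of the synthesis step (Equations 1-3): check that the two labels are
# close enough, then linearly interpolate feature vectors and labels.
import numpy as np

rng = np.random.default_rng(0)

def synthesize(f1, l1, f2, l2, target=None, m=3.0, alpha=0.5):
    if abs(l1 - l2) > m:                  # Equation 1: label-gap threshold
        return None                       # skip synthesis; re-select candidates
    if target is None:
        lam = rng.beta(alpha, alpha)      # λ ~ Beta(α, α) for random mixing
    else:
        lam = (l2 - target) / (l2 - l1)   # choose λ so l3 == target (l1 != l2)
    f3 = lam * f1 + (1.0 - lam) * f2      # Equation 2: synthesized feature vector
    l3 = lam * l1 + (1.0 - lam) * l2      # Equation 3: synthesized label (age)
    return f3, l3

f3, l3 = synthesize(np.random.rand(512), 3.0,
                    np.random.rand(512), 5.0, target=4.0)  # l3 == 4.0
```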
  • the bone image generation and transmission module 240 may generate a bone image corresponding to the target age based on the third feature vector generated by the synthesis module 230 . More specifically, the bone image generation and transmission module 240 may generate a target bone image by inputting feature information corresponding to the target bone image to be generated into a pre-trained second reading model that generates an image corresponding to the input feature information.
  • the second reading model may include an artificial neural network model such as an autoencoder or a Generative Adversarial Network (GAN) (e.g., CycleGAN, BigBiGAN), based on which a bone image corresponding to the target age may be generated.
  • the bone image generation and transmission module 240 may input the third feature vector (for example, a feature vector generated based on the synthesis of feature vectors extracted from candidate bone images) into the second reading model to generate a target bone image (a bone image corresponding to the target age) corresponding to the third feature vector.
  • the artificial neural network included in the second reading model may be pre-trained to generate a target bone image corresponding to the feature vector by inputting an initial bone image together with a feature vector corresponding to the target bone image to be generated.
  • the initial bone image may be, for example, one of the candidate bone images, but it is not limited thereto, and any bone image may be provided.
  • the bone image generation and transmission module 240 may input, into the second reading model, the third feature vector together with the first candidate bone image or the second candidate bone image that served as the basis for generating the third feature vector, thereby generating a target bone image corresponding to the third feature vector.
  • the second reading model trained to generate a target bone image by receiving an initial bone image together with the third feature vector can generate a target bone image of improved quality compared to the case where only the feature vector is used, for instance along the lines of the sketch below.
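  • As a rough sketch of such a second reading model, a decoder can be conditioned on both the synthesized feature vector and an initial bone image; the toy architecture and layer sizes below are assumptions for illustration, not the patent's actual generator (which may be an autoencoder or a GAN).

```python
# Sketch: a small conditional decoder taking a synthesized feature vector plus
# an initial bone image and producing a target bone image.
import torch
import torch.nn as nn

class ConditionalDecoder(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.encode = nn.Sequential(              # encode the initial bone image
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.project = nn.Linear(feat_dim, 64 * 56 * 56)
        self.decode = nn.Sequential(              # decode to the target bone image
            nn.ConvTranspose2d(128, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, feature_vector, initial_image):
        img = self.encode(initial_image)                         # (B, 64, 56, 56)
        vec = self.project(feature_vector).view(-1, 64, 56, 56)  # broadcast features
        return self.decode(torch.cat([img, vec], dim=1))         # (B, 1, 224, 224)

decoder = ConditionalDecoder()
target_image = decoder(torch.randn(1, 512), torch.randn(1, 1, 224, 224))
```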
  • the bone image generation and transmission module 240 is not limited to the presented example, and may be implemented through any method for generating a bone image based on a feature vector.
  • the bone image generation and transmission module 240 may store the generated bone image in a database or provide it to an external entity.
  • the bone image generation and transmission module 240 may provide the generated bone image to an external entity using a predetermined display device or the like, or through a communication unit provided.
  • the external entity includes a user, an administrator, and a medical professional in charge of the computing device 100 , but it should be understood that any subject requiring a bone image for the target age may also be included.
  • the external entity may be an external AI device including separate artificial intelligence (AI) hardware and/or software modules that utilize the bone image.
  • 'external' in the external entity is not intended to exclude embodiments in which the AI hardware and/or software module utilizing the bone image is integrated into the computing device 100 ; rather, it is used to suggest that the bone image, which is the result of the hardware and/or software module performing the method of the present invention, can be utilized as input data of other methods. That is, the external entity may be the computing device 100 itself.
  • Although the components shown in FIG. 2 are exemplified as being realized in one computing device for convenience of description, it will be understood that a plurality of computing devices 100 performing the method of the present invention may be configured to interwork with each other.
  • FIG. 3 is a flowchart illustrating a method for generating a bone image according to an embodiment.
  • the computing device may receive a target bone age through the communication unit in step 310 .
  • the computing device may acquire a first candidate bone image and a second candidate bone image for generating a bone image corresponding to the target bone age in operation 320 . If the target bone age falls between the bone age corresponding to the first candidate bone image and the bone age corresponding to the second candidate bone image, the computing device may determine whether the interval between the bone age of the first candidate bone image and the bone age of the second candidate bone image is smaller than a preset interval. When the interval is less than or equal to the predetermined interval, the computing device may perform the subsequent step 330 .
  • otherwise, the computing device may perform a process of re-acquiring at least one of the first candidate bone image and the second candidate bone image so that the interval becomes less than or equal to the predetermined interval, as sketched below. This is to improve the accuracy of the result by using bone images corresponding to ages close to the target bone age in the process of generating the image corresponding to the target bone age, as described above.
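  • A minimal sketch of this interval check, assuming the preset interval value shown (an illustrative choice):

```python
# Sketch of the check in operation 320: proceed to feature extraction only
# when the candidate bone ages bracket the target and their gap is within a
# preset interval; otherwise signal that candidates must be re-acquired.
def check_candidates(age1: float, age2: float, target: float,
                     max_interval: float = 2.0) -> str:
    lo, hi = sorted((age1, age2))
    if not (lo <= target <= hi):
        return "re-acquire"          # target age not bracketed by the candidates
    if hi - lo > max_interval:
        return "re-acquire"          # gap too wide for an accurate synthesis
    return "proceed"                 # go on to step 330 (feature extraction)

print(check_candidates(3.0, 5.0, 4.0))    # -> "proceed"
print(check_candidates(1.0, 19.0, 10.0))  # -> "re-acquire"
```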
  • in step 330 , the computing device may extract a first feature vector and a second feature vector for each of the first candidate bone image and the second candidate bone image based on the pre-trained first reading model.
  • the pre-trained first reading model may be an artificial neural network trained to extract a feature vector from the input bone image and read the age of the input bone image based thereon.
  • the first reading model may be trained to output a feature vector of an input bone image based on the processed training bone image.
  • the first reading model used in the present invention may be trained using processed training bone images to improve the accuracy of reading.
  • the processed training bone images may further include, together with the existing training bone images, a third training bone image generated by synthesizing a first training bone image and a second training bone image obtained from the training bone images.
  • the third training bone image may correspond to a third bone age between a first bone age corresponding to the first training bone image and a second bone age corresponding to the second training bone image.
  • compared with the existing training data, the processed training bone images may include elements corresponding to all individual ages. The present invention has a notable effect in that a reading result of higher accuracy can be provided through a reading model trained using the processed training bone images.
  • the computing device may generate a third feature vector corresponding to the target bone age by synthesizing the first feature vector and the second feature vector in operation 340 .
  • the computing device may generate the third feature vector by interpolating the first feature vector and the second feature vector, and the method of generating the third feature vector is not limited to the presented example as described above.
  • the computing device may generate a target bone image corresponding to the target bone age based on the third feature vector in operation 350 .
  • FIG. 4 is a diagram illustrating an example in which a method for generating a bone image according to an embodiment is performed.
  • a system for reading bone age through an image may provide reference bone images corresponding to individual ages to a reader, and the reader may read the age of an input bone image through comparison with the reference bone images. If the reference bone image 410 corresponding to the age of 3 is absent from the reference bone images stored in the database, as shown on the left, it may be difficult for the reader to read the bone age of the corresponding age group. In addition, in the case of an infant with a very rapid development rate, since the difference in bone growth can be large even over a few months, predicting the bone age of infants between the ages of 1 and 2 can be very difficult even if the reference bone image corresponding to age 1 and the reference bone image corresponding to age 2 are used.
  • the computing device 420 may receive a first candidate bone image 431 and a second candidate bone image 432 corresponding to ages adjacent to the missing age (for example, two and four years old) in order to generate the missing three-year-old reference bone image, and may generate a bone image 440 corresponding to that age based on them.
  • in addition to the suggested method, the computing device 420 may generate a bone image corresponding to the target age based on feature vectors previously stored in a database with age-based labels. In this case, in the process of generating the bone image corresponding to the target age, the computing device 420 may generate the bone image only from information on the target bone age, without receiving a bone image.
  • FIG. 5 is a diagram illustrating a distribution of feature vectors according to age.
  • a graph 510 may be a distribution of a feature vector for a male bone image
  • a graph 520 may be a distribution of a feature vector for a female bone image.
  • Each of the points 511 and 512 included in the graphs 510 and 520 may correspond to a feature vector of an individual bone image, and the color of the points 511 and 512 may indicate a corresponding age.
  • Referring to the graphs, points 511 and 512 corresponding to similar ages are distributed in adjacent areas. Considering this distribution, it can be understood that a feature vector calculated through the synthesis of feature vectors of adjacent ages can reflect the characteristics of the target age.
  • FIG. 6 is a diagram illustrating an example of an interface to which a method for generating a bone image according to an embodiment is applied.
  • reference bone images 621 , 622 , and 623 of a similar age to the current bone image 620 may be provided through the interface 610 .
  • the reference bone images 621 , 622 , and 623 corresponding to each age may be pre-stored in a database.
  • the reader may request to generate a reference bone image corresponding to the corresponding age based on a user input to the graphic object 630 .
  • the graphic object 640 for inputting the target age may be displayed on the screen.
  • the computing device may generate a reference bone image 650 corresponding to the target age and display it on the screen.
  • FIG. 7 is an exemplary block diagram illustrating hardware or software components of a computing device performing a method for generating a medical image according to the present invention.
  • the individual modules shown in FIG. 7 may be implemented by, for example, the communication unit 110 or the processor 120 included in the computing device 100 , or by the communication unit 110 and the processor 120 interworking, as those skilled in the art will understand.
  • the computing device 100 may include a target rating information receiving module 710 as a component thereof.
  • the target rating information receiving module 710 may receive target rating information from an input device included in the computing device or from an interworking user terminal.
  • the grade referred to in this specification may mean a numerical grade for a medical image according to any condition.
  • the fundus image may be classified into a grade 0 corresponding to a normal state to a grade 4 having a maximum degree of progression according to the degree of progression of diabetic retinopathy.
  • the bone image may be classified according to bone age as described above, and additionally, the grade of the individual bone image may be determined according to the malignancy of the lesion present in the bone image.
  • those of ordinary skill in the art will understand that the grade referred to in this specification may refer to quantified information determined for an individual medical image based on a predetermined condition (e.g., age, malignancy of a lesion, degree of progression of a predetermined disease, etc.).
  • the target grade information may refer to information on a grade of a medical image to be newly created through an existing medical image.
  • the feature vector acquisition module 720 may acquire a feature vector for generating a medical image corresponding to the target grade information.
  • the feature vector acquisition module 720 may include a pre-trained first' reading model for extracting a feature vector from an input medical image and reading a grade of the medical image based thereon.
  • the first' reading model may be implemented based on the same kinds of artificial neural networks as the first reading model of the feature vector acquisition module 220 of FIG. 2 above.
  • the first' reading model included in the feature vector acquisition module 720 may be trained in advance, based on training medical images labeled with the corresponding grades, to extract a feature vector for an input medical image and to read a grade corresponding to the input medical image based on the extracted feature vector.
  • the first' reading model may be additionally trained based on synthesized training medical images generated based on the synthesis of existing training medical images. For example, if the existing training medical images include fundus images corresponding to grades 1 and 3 of the progression of diabetic retinopathy, the first' reading model may be trained not only on the existing fundus images corresponding to grades 1 and 3 but also on a fundus image corresponding to grade 2 generated based on the synthesis of the fundus images corresponding to grades 1 and 3. The generation of synthetic training medical images using existing training medical images may be performed based on the operations of the synthesis module 730 and the medical image generation and transmission module 740 , which will be described below.
  • the method of generating synthetic training medical images is not limited to the example shown; synthetic training medical images of more various grades may be generated based on the existing training medical images.
  • For example, synthetic training fundus images corresponding to grades 1.6, 2.2, 2.5, and the like may be generated from training fundus images corresponding to grades 1 and 3, and the additional training may proceed along the lines of the sketch below.
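  • A hedged sketch of this additional-training idea, applied in feature space for brevity; the data, the beta parameters, and the regression head are illustrative assumptions, not the patent's actual training setup (which synthesizes whole images via the modules above).

```python
# Sketch: augment a training set with synthesized (feature, label) pairs of
# intermediate grades, then fine-tune a small grade-reading head on both the
# original and synthesized examples.
import numpy as np
import torch
import torch.nn as nn

feats = torch.randn(100, 512)                  # existing training feature vectors
grades = torch.randint(0, 5, (100,)).float()   # existing grade labels

lam = torch.from_numpy(np.random.beta(0.5, 0.5, size=100)).float()
perm = torch.randperm(100)
aug_feats = lam[:, None] * feats + (1 - lam[:, None]) * feats[perm]
aug_grades = lam * grades + (1 - lam) * grades[perm]   # intermediate grades

head = nn.Linear(512, 1)                       # grade-reading head to fine-tune
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for x, y in [(feats, grades), (aug_feats, aug_grades)]:
    opt.zero_grad()
    loss = nn.functional.mse_loss(head(x).squeeze(1), y)
    loss.backward()
    opt.step()
```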
  • the feature vector acquisition module 720 may acquire a feature vector for an input medical image through an operation of acquiring the output of an intermediate layer of the first' reading model as the feature vector.
  • the feature vector acquisition module 720 may acquire a first candidate medical image and a second candidate medical image for generating a medical image corresponding to the target grade, and may use the first' reading model to extract a first' feature vector and a second' feature vector corresponding to the first candidate medical image and the second candidate medical image, respectively.
  • For example, when target grade information is input as grade 2 of the progression of diabetic retinopathy for a fundus image, the feature vector acquisition module 720 may acquire a first candidate medical image of grade 0 corresponding to the normal state (a first candidate fundus image) and a second candidate medical image of grade 3 in which diabetic retinopathy has progressed to a certain extent (a second candidate fundus image), and may extract a first' feature vector and a second' feature vector corresponding to each candidate medical image.
  • the method of acquiring the candidate fundus images is not limited to the presented example; it will be understood by those skilled in the art that, for example, a first candidate fundus image of grade 1 and a second candidate fundus image of grade 4 may also be acquired to generate a fundus image corresponding to the target grade of 2.
  • the feature vector acquisition module 720 may acquire a first' feature vector and a second' feature vector corresponding to the target grade information from among feature vectors stored in a database.
  • For example, a feature vector corresponding to each grade may be stored in a database; when the target grade is grade 2, the feature vector acquisition module 720 may obtain a feature vector corresponding to grade 0 and a feature vector corresponding to grade 3 from the database.
  • the synthesis module 730 may synthesize the first' feature vector and the second' feature vector obtained through the feature vector acquisition module 720 to generate a third' feature vector corresponding to the target grade information. Specifically, the synthesis module 730 may first determine whether to synthesize the first' feature vector and the second' feature vector based on a predetermined condition. For example, the synthesis module 730 may determine whether to synthesize them based on Equation 4:
  • Equation 4: |L_1' - L_2'| ≤ m
  • where L_1' may represent a first' label (grade) corresponding to the first' feature vector, L_2' may represent a second' label (grade) corresponding to the second' feature vector, and m may represent a preset threshold.
  • m may be a threshold value set in advance for the accuracy of the synthesis result.
  • For example, a result with higher accuracy may be obtained by using a feature vector corresponding to grade 1 diabetic retinopathy and a feature vector corresponding to grade 3 diabetic retinopathy than by using a feature vector corresponding to grade 0 (a fundus image in which diabetic retinopathy has not occurred) and a feature vector corresponding to grade 4 diabetic retinopathy.
  • the synthesis module 730 may determine whether to synthesize the feature vector according to a predetermined threshold value.
  • the synthesis module 730 may determine whether to proceed with synthesis according to a predetermined probability.
  • when the condition of Equation 4 is satisfied, the synthesis module 730 may perform synthesis of the feature vectors and the labels corresponding to the feature vectors based on Equations 5 and 6:
  • Equation 5: F_3' = λ·F_1' + (1 - λ)·F_2'
  • Equation 6: L_3' = λ·L_1' + (1 - λ)·L_2'
  • F_1', F_2', and F_3' are the first', second', and third' feature vectors, respectively
  • L_1', L_2', and L_3' denote a first' label corresponding to the first' feature vector, a second' label corresponding to the second' feature vector, and a third' label corresponding to the third' feature vector, respectively
  • λ may be any variable indicating the degree of synthesis; for example, λ may be a variable following a beta distribution
  • through Equations 5 and 6, a third' feature vector and a third' label (grade) corresponding thereto may be determined.
  • the method by which the synthesis module 730 performs the synthesis may include any data synthesis method, including methods using linear interpolation or a fully connected layer, as sketched below.
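  • As a sketch of the fully-connected-layer alternative mentioned above (the architecture and sizes are assumptions, not the patent's design), a small learned module can map the two candidate feature vectors and the target grade to a synthesized feature vector:

```python
# Sketch: a learned synthesizer that replaces linear interpolation with a
# fully connected network conditioned on the target grade.
import torch
import torch.nn as nn

class FCSynthesizer(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim + 1, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))

    def forward(self, f1, f2, target_grade):
        # Concatenate both candidate feature vectors and the target grade.
        x = torch.cat([f1, f2, target_grade[:, None]], dim=1)
        return self.net(x)

synth = FCSynthesizer()
f3 = synth(torch.randn(1, 512), torch.randn(1, 512), torch.tensor([2.0]))
```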
  • the medical image generation and transmission module 740 may generate a medical image corresponding to the target grade based on the third' feature vector generated by the synthesis module 730 . More specifically, the medical image generation and transmission module 740 may generate a target medical image by inputting feature information corresponding to the target medical image to be generated into a pre-trained second' reading model that generates an image corresponding to the input feature information.
  • the second' reading model may include an artificial neural network model such as an autoencoder or a GAN (e.g., CycleGAN, BigBiGAN), and based on this, a medical image corresponding to the target grade may be generated.
  • the medical image generation and transmission module 740 may input the third' feature vector (e.g., a feature vector generated based on the synthesis of feature vectors obtained from candidate medical images) into the second' reading model to generate a medical image corresponding to the third' feature vector (a medical image corresponding to the target grade).
  • the artificial neural network included in the second' reading model may be pre-trained to generate a target medical image corresponding to a feature vector by receiving an initial medical image along with the feature vector corresponding to the target medical image to be generated.
  • the initial medical image may be, for example, one of the candidate medical images, but the present invention is not limited thereto, and an arbitrary medical image may be provided.
  • the medical image generation and transmission module 740 may input, into the second' reading model trained with an initial medical image as an additional input, the third' feature vector together with the first candidate medical image or the second candidate medical image that served as the basis for generating the third' feature vector, thereby generating a target medical image corresponding to the third' feature vector.
  • the second' reading model trained to generate a target medical image by receiving an initial medical image along with the third' feature vector may generate a target medical image of improved quality compared to the case where only the feature vector is used.
  • it can be understood that the medical image generation and transmission module 740 is not limited to the presented example and may be implemented through any method of generating a target medical image based on the synthesis of feature vectors extracted from candidate medical images.
  • the medical image generation and transmission module 740 may store the generated medical image in a database or provide it to an external entity.
  • the medical image generation and transmission module 740 may provide the generated medical image to an external entity using a predetermined display device or the like or through a communication unit provided therein.
  • the meaning of the external entity may be the same as described above with reference to FIG. 2 .
  • Although the components shown in FIG. 7 are exemplified as being realized in one computing device for convenience of description, it will be understood that a plurality of computing devices 100 performing the method of the present invention may be configured to interwork with each other.
  • FIG. 8 is a flowchart illustrating a method of generating a medical image according to an exemplary embodiment.
  • the computing device may receive target rating information in operation 810 .
  • the target grade information corresponds to information on a grade of a medical image to be newly generated, and may be received from an external entity through the communication unit or input through a provided input device.
  • the computing device may acquire a first candidate medical image and a second candidate medical image for generating a medical image corresponding to the target class information.
  • if the target grade falls between the grade of the first candidate medical image and the grade of the second candidate medical image, the computing device may determine whether the interval between the grade of the first candidate medical image and the grade of the second candidate medical image is smaller than a predetermined interval.
  • when the interval is less than or equal to the predetermined interval, the computing device may perform the subsequent operation 830 .
  • otherwise, the computing device may perform a process of re-acquiring at least one of the first candidate medical image and the second candidate medical image so that the interval becomes less than or equal to the predetermined interval. This is to improve the accuracy of the result by using medical images corresponding to grades close to the target grade in the process of generating a medical image corresponding to the target grade.
  • the computing device may obtain a first' feature vector and a second' feature vector for each of the first candidate medical image and the second candidate medical image, based on the pre-trained first' reading model, in step 830 .
  • the pre-trained first' reading model may be an artificial neural network trained to extract a feature vector from an input medical image and read a grade of the input medical image based thereon.
  • the first' reading model may be trained, based on the processed training medical images described above (including, together with the existing training images, synthesized training images generated based on the synthesis of the existing training images), to extract a feature vector of an input medical image and calculate a grade of the input medical image based on the extracted feature vector.
  • a person skilled in the art can easily understand how the first' reading model is trained to read the grade of a medical image by using medical images labeled with the corresponding grades as training data.
  • the first' reading model used in the present invention may be trained using processed training medical images to improve the accuracy of reading.
  • the processed training medical images may further include, together with the existing training medical images, a third training medical image generated by synthesizing a first training medical image and a second training medical image obtained from the training medical images.
  • the third training medical image may correspond to a third grade between a first grade corresponding to the first training medical image and a second grade corresponding to the second training medical image.
  • compared with the existing training data, the processed training data may include elements corresponding to all individual grades. The present invention may provide a more accurate reading result through a reading model trained using the processed training data.
  • the computing device may generate a third' feature vector corresponding to the target grade by synthesizing the first' feature vector and the second' feature vector in operation 840 .
  • for example, the computing device may generate the third' feature vector by interpolating the first' feature vector and the second' feature vector, as described above with reference to Equations 4 to 6.
  • the computing device may generate a target medical image corresponding to the target grade based on the third' feature vector.
  • the hardware may include a general-purpose computer and/or a dedicated computing device, a specific computing device, or special features or components of a specific computing device.
  • the processes may be realized by one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors, or other programmable devices having internal and/or external memory. Additionally or alternatively, the processes may be implemented with an application-specific integrated circuit (ASIC), a programmable gate array, programmable array logic (PAL), or any other device or combination of devices configurable to process electronic signals.
  • the parts of the technical solution of the present invention that contribute over the prior art may be implemented in the form of program instructions executable through various computer components and recorded in a machine-readable recording medium.
  • the machine-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
  • the program instructions recorded on the machine-readable recording medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the art of computer software.
  • Examples of the machine-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROM, DVD, and Blu-ray, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine code, such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like. Program instructions can be created using a structured programming language such as C, an object-oriented programming language such as C++, or another high-level or low-level programming language, and may be stored and compiled or interpreted to run on any one of the devices described above, as well as on heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or on any other machine capable of executing program instructions.
  • when the methods and combinations of methods described above are performed by one or more computing devices, the methods and method steps may be implemented as executable code that performs the respective steps.
  • the methods may be implemented as systems that perform the steps, and they may be distributed across devices in various ways, or all of the functions may be integrated into one dedicated, standalone device or other hardware.
  • the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such sequential combinations and permutations are intended to fall within the scope of the present disclosure.
  • the hardware device may be configured to operate as one or more software modules to perform processing in accordance with the present invention, and vice versa.
  • the hardware device may include a processor, such as an MPU, CPU, GPU, or TPU, coupled with a memory such as ROM/RAM for storing program instructions and configured to execute the instructions stored in the memory, and may include a communication unit capable of exchanging signals with an external device.
  • the hardware device may include a keyboard, a mouse, and other external input devices for receiving commands written by developers.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Physics & Mathematics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed is a medical image generation method performed by a computing device. A medical image generation method according to an embodiment may comprise the steps of: acquiring a first feature vector and a second feature vector from a first candidate medical image and a second candidate medical image, respectively; synthesizing the acquired first and second feature vectors to generate a third feature vector corresponding to a target rating which is positioned between a first rating corresponding to the first candidate medical image and a second rating corresponding to the second candidate medical image; and generating a target medical image corresponding to the target rating on the basis of the third feature vector.

Description

Medical image generation method and device using same
The present invention relates to a method for generating a medical image including a bone image, and to an apparatus using the same.
An artificial neural network that extracts a feature vector from an input medical image and computes a corresponding result (for example, an age) based on it can be trained on prepared training data. In that case, performance is likely to degrade for ages for which some of the training data is missing.
In particular, when time-series sequences of medical images are used as training data, the reading accuracy of the artificial neural network may decrease for intervals absent from the representative medical images, that is, intervals corresponding to intermediate stages (for bone images, e.g., 10.4 years or 21.3 years of age), because training data for those intervals is missing.
An object of the present invention is to provide a means of assisting the reading of bone images by using previously stored bone images to generate bone images corresponding to ages that are not present in the database.
Another object is to provide a method of training a reading model that can improve the reading accuracy of a model that extracts feature vectors from bone images.
Furthermore, the synthesis scheme of the bone image generation method according to the present invention is not limited to bone images; it extends to any medical image to which a predetermined grade (for example, any quantified factor such as age or degree of disease progression) has been assigned. The present invention can thereby be applied to generating, from existing medical images, a medical image corresponding to a grade that does not yet exist.
The characteristic configuration of the present invention for achieving the objects described above and realizing the characteristic effects described below is as follows.
According to one aspect of the present invention, a bone image generation method performed by a computing device includes the steps of: (a) receiving a target bone age; (b) acquiring a first candidate bone image and a second candidate bone image for generating a bone image corresponding to the target bone age; (c) extracting, based on a pre-trained reading model, a first feature vector and a second feature vector from the first candidate bone image and the second candidate bone image, respectively; (d) synthesizing the first feature vector and the second feature vector to generate a third feature vector corresponding to the target bone age; and (e) generating, based on the third feature vector, a target bone image corresponding to the target bone age.
According to one embodiment, the pre-trained reading model is trained in advance to output the feature vector of an input bone image based on processed training bone images, and the processed training bone images are produced by synthesizing a first training bone image and a second training bone image obtained from the training bone images so as to generate a third training bone image corresponding to a third bone age lying between a first bone age corresponding to the first training bone image and a second bone age corresponding to the second training bone image.
A computing device for generating a bone image according to one aspect includes: a communication unit that receives an input of a target bone age; and a processor that generates a target bone image corresponding to the target bone age, wherein the processor acquires, based on a pre-trained reading model, a first feature vector and a second feature vector from a first candidate bone image and a second candidate bone image, respectively, synthesizes the first feature vector and the second feature vector to generate a third feature vector corresponding to the target bone age, and generates, based on the third feature vector, the target bone image corresponding to the target bone age.
According to one embodiment of the present invention, the accuracy of bone age reading can be improved by additionally providing bone images corresponding to ages that are not in the database.
In addition, the improved training method using processed training data provides the notable effect of improving the performance of a reading model trained to extract feature vectors from bone images.
The present invention ultimately has the potential effect of improving the accuracy of clinicians' diagnoses and, by providing reference bone images corresponding to individual ages, innovating the workflow in clinical practice.
Moreover, since medical images conventionally used in hospitals can be utilized as they are, the method of the present invention is of course not dependent on any particular image format or platform.
The present invention can also generate a medical image corresponding to a given grade (a medical image of a grade that does not already exist) by synthesizing feature vectors extracted from existing medical images.
The accompanying drawings used in describing embodiments of the present invention are only some of those embodiments; a person of ordinary skill in the art to which the present invention pertains (hereinafter, "a person skilled in the art") could obtain other drawings from them without inventive effort.
Fig. 1 is a conceptual diagram schematically illustrating an exemplary configuration of a computing device that performs the method for generating a medical image including a bone image according to the present invention.
Fig. 2 is an exemplary block diagram illustrating hardware or software components of a computing device that performs the bone image generation method according to the present invention.
Fig. 3 is a flowchart illustrating a bone image generation method according to one embodiment.
Fig. 4 is a diagram illustrating an example in which the bone image generation method according to one embodiment is performed.
Fig. 5 is a diagram illustrating the distribution of feature vectors according to age.
Fig. 6 is a diagram illustrating an example of an interface to which the bone image generation method according to one embodiment is applied.
Fig. 7 is an exemplary block diagram illustrating hardware or software components of a computing device that performs the medical image generation method according to the present invention.
Fig. 8 is a flowchart illustrating a medical image generation method according to one embodiment.
The following detailed description of the present invention refers to the accompanying drawings, which show by way of illustration specific embodiments in which the invention may be practiced, in order to clarify the objects, technical solutions, and advantages of the invention. These embodiments are described in sufficient detail to enable a person skilled in the art to practice the invention.
The term "image" or "image data" as used throughout this description and the claims refers to multidimensional data composed of discrete image elements (for example, pixels in a two-dimensional image and voxels in a three-dimensional image). For example, an "image" may be a medical image of a subject collected by X-ray imaging, (cone-beam) computed tomography, magnetic resonance imaging (MRI), ultrasound, or any other medical imaging system known in the art. An image may also be provided in a non-medical context, for example by a remote sensing system or electron microscopy.
Throughout this description and the claims, "image" refers either to an image viewable by the eye (for example, one displayed on a video screen) or to a digital representation of an image (for example, a file corresponding to the pixel output of a CT or MRI detector).
Throughout this description and the claims, the "DICOM (Digital Imaging and Communications in Medicine)" standard is a collective term for the various standards used for digital image representation and communication in medical devices; the DICOM standard is published by the joint committee formed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA).
Also, throughout this description and the claims, a "picture archiving and communication system (PACS)" is a system that stores, processes, and transmits images in accordance with the DICOM standard. Medical images acquired with digital medical imaging equipment such as X-ray, CT, and MRI devices are stored in DICOM format and can be transmitted over a network to terminals inside and outside the hospital, and reading results and medical records can be appended to them.
Throughout this description and the claims, "training" or "learning" refers to performing machine learning through procedural computing; a person skilled in the art will understand that it is not intended to refer to mental processes such as human educational activity.
Throughout this description and the claims, the word "comprise" and its variations are not intended to exclude other technical features, additions, components, or steps. Further, "one" or "a/an" is used to mean one or more, and "another" is limited to at least a second or more.
Other objects, advantages, and features of the present invention will become apparent to a person skilled in the art, partly from this description and partly from practice of the invention. The following examples and drawings are provided by way of illustration and are not intended to limit the invention. Accordingly, details disclosed herein regarding a specific structure or function are not to be construed as limiting, but merely as representative material guiding a person skilled in the art to practice the invention in various ways with virtually any suitable detailed structure.
Moreover, the invention encompasses all possible combinations of the embodiments indicated herein. It should be understood that the various embodiments of the invention, while different from one another, need not be mutually exclusive. For example, a particular shape, structure, or characteristic described herein in connection with one embodiment may be implemented in another embodiment without departing from the spirit and scope of the invention. It should also be understood that the position or arrangement of individual components within each disclosed embodiment may be changed without departing from the spirit and scope of the invention. The detailed description below is therefore not to be taken in a limiting sense, and the scope of the invention, properly described, is limited only by the appended claims together with the full range of equivalents to which those claims are entitled. Like reference numerals in the drawings denote the same or similar functions in several respects.
Unless otherwise indicated herein or clearly contradicted by context, an item referred to in the singular encompasses the plural unless the context requires otherwise. Further, in describing the present invention, detailed descriptions of known configurations or functions are omitted where they could obscure the gist of the invention.
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings so that a person skilled in the art can easily practice the invention.
Fig. 1 is a conceptual diagram schematically illustrating an exemplary configuration of a computing device that performs the method for generating a medical image including a bone image according to the present invention.
Referring to Fig. 1, a computing device 100 according to one embodiment of the present invention includes a communication unit 110 and a processor 120, and can communicate directly or indirectly with an external computing device (not shown) through the communication unit 110.
Specifically, the computing device 100 may achieve the desired system performance using a combination of typical computer hardware (for example, devices that may include a computer processor, memory, storage, input and output devices, and other components of conventional computing devices; electronic communication devices such as routers and switches; and electronic information storage systems such as network-attached storage (NAS) and storage area networks (SAN)) and computer software (that is, instructions that cause the computing device to function in a particular way).
The communication unit 110 of such a computing device can send and receive requests and responses to and from other interworking computing devices. As one example, such requests and responses may be carried over the same TCP (transmission control protocol) session, but they are not limited to this and may, for example, be sent and received as UDP (user datagram protocol) datagrams. In a broad sense, the communication unit 110 may also include a keyboard, a mouse, and other external input devices for receiving commands or instructions, as well as printers, displays, and other external output devices.
The processor 120 of the computing device may include hardware components such as an MPU (micro processing unit), CPU (central processing unit), GPU (graphics processing unit) or TPU (tensor processing unit), cache memory, and a data bus. It may further include an operating system and the software configuration of applications serving specific purposes.
In the following description, the images of the present invention are bone images, and the bone images are, as an example, bone X-ray images; however, the scope of the present invention is not limited to these, and a person skilled in the art will readily understand that the invention is applicable to bone images of any general form and can be applied in the course of reading the bone age of a target image.
In addition to bone images, the present invention can be extended to any medical image that has been graded in some way. One example of a graded medical image is a fundus image labeled with a grade for the degree of progression of diabetic retinopathy in the imaged eye; however, the scope of the present invention is not limited to this, and a person skilled in the art will understand that it is applicable to medical images graded according to any criterion.
Fig. 2 is an exemplary block diagram illustrating hardware or software components of a computing device that performs the bone image generation method according to the present invention.
A person skilled in the art will understand that the individual modules shown in Fig. 2 may be implemented by, for example, the communication unit 110 or the processor 120 included in the computing device 100, or by the communication unit 110 and the processor 120 working together.
Briefly reviewing the configuration of the method and apparatus according to the present invention with reference to Fig. 2, the computing device 100 may include, as a component, a target bone age information receiving module 210. The target bone age information receiving module 210 may receive target bone age information from an input device included in the computing device or from an interworking user terminal.
Based on the target bone age information obtained through the target bone age information receiving module 210, a feature vector acquisition module 220 may acquire feature vectors for generating a bone image corresponding to the target bone age information.
According to one embodiment, the feature vector acquisition module 220 may include a first reading model trained in advance to extract a feature vector from an input bone image and to read a bone age based on the extracted feature vector. As a person skilled in the art will understand, the first reading model may use, for example, a DNN (deep neural network), CNN (convolutional neural network), DCNN (deep convolutional neural network), RNN (recurrent neural network), RBM (restricted Boltzmann machine), DBN (deep belief network), SSD (single shot detector), or YOLO (You Only Look Once). The types of artificial neural networks usable in the present invention are not limited to these examples; a person skilled in the art will understand that any artificial neural network that can be trained on labeled training data to extract a feature vector from a bone image and to read a bone age based on the extracted feature vector may be used.
According to one embodiment, the first reading model included in the feature vector acquisition module 220 may be trained in advance, based on training bone images labeled with their corresponding ages, to extract a feature vector from an input bone image and to read the bone age corresponding to the input bone image based on the extracted feature vector.
The first reading model may also be further trained on synthetic training bone images generated by synthesizing two or more existing training bone images. For example, if the existing training bone images include images corresponding to ages 1 and 3, the first reading model may be trained not only on the existing age-1 and age-3 bone images but also on an age-2 bone image generated by synthesizing them. A person skilled in the art will understand that generating a synthetic training bone image from existing training bone images can be carried out through the operations of the synthesis module 230 and the bone image generation and transmission module 240 described below. Moreover, the way synthetic training bone images are generated is not limited to this example; synthetic training bone images of more varied ages can be generated from the existing training bone images. For example, from training bone images corresponding to ages 1 and 3, synthetic training bone images corresponding to ages 1.6, 2.2, 2.5, and so on can be generated, again through the operations of the synthesis module 230 and the bone image generation and transmission module 240 described below.
The feature vector acquisition module 220 may obtain the feature vector of an input bone image by taking the output of an intermediate layer within the first reading model as the feature vector.
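As an illustration only, the sketch below shows one way the intermediate-layer output of a reading model could be exposed as the feature vector. The disclosure fixes neither a framework nor an architecture, so the PyTorch backbone, the layer sizes, and the 128-dimensional feature vector here are all assumptions.
```python
import torch
import torch.nn as nn

class ReadingModel(nn.Module):
    """Toy stand-in for the pre-trained first reading model: a convolutional
    feature extractor whose intermediate output serves as the feature vector,
    followed by a bone-age regression head."""
    def __init__(self, feature_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim),
        )
        self.age_head = nn.Linear(feature_dim, 1)

    def forward(self, x):
        f = self.features(x)      # intermediate-layer output used as the feature vector
        age = self.age_head(f)    # bone-age reading based on that vector
        return age, f

model = ReadingModel()
bone_image = torch.randn(1, 1, 256, 256)   # stand-in for a bone X-ray tensor
_, feature_vector = model(bone_image)      # shape: (1, 128)
```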
The feature vector acquisition module 220 may acquire at least two candidate bone images relevant to the target bone age and extract the feature vector corresponding to each through the pre-trained first reading model. For example, when the target bone age is 4 years, the feature vector acquisition module acquires a first candidate bone image corresponding to age 3 and a second candidate bone image corresponding to age 5, and extracts the first feature vector and the second feature vector corresponding to the respective images.
According to another embodiment, the feature vector acquisition module 220 may obtain, from among feature vectors stored in a database, the first feature vector and the second feature vector corresponding to the target bone age. Depending on the embodiment, feature vectors corresponding to each age may be kept in a database; when the target age is 4 years, the feature vector acquisition module 220 may obtain the feature vector corresponding to age 3 and the feature vector corresponding to age 5 from the database.
The synthesis module 230 may synthesize the first feature vector and the second feature vector obtained through the feature vector acquisition module 220 to generate a third feature vector corresponding to the target age. Specifically, the synthesis module 230 may first determine, based on a predetermined condition, whether to synthesize the first feature vector and the second feature vector. For example, the synthesis module 230 may determine whether to synthesize them based on Equation 1.
[Equation 1]
$|L_1 - L_2| \leq m$
Here, L1 is the first label (age) corresponding to the first feature vector, L2 is the second label (age) corresponding to the second feature vector, and m is a preset threshold.
m may be a threshold set in advance to ensure the accuracy of the synthesis result. Specifically, to generate a feature vector corresponding to age 10, using a feature vector corresponding to age 9 and a feature vector corresponding to age 11 can yield a more accurate result than using feature vectors corresponding to ages 1 and 19. The synthesis module 230 may therefore decide whether to synthesize feature vectors according to the predetermined threshold.
According to another embodiment, when the gap between the labels of the feature vectors is at or below the threshold, the synthesis module 230 may determine, according to a predetermined probability, whether to proceed with the synthesis.
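A minimal sketch of this decision rule follows; the threshold of 2 years and the gating probability are illustrative values, not taken from the disclosure.
```python
import random

def should_synthesize(label_1, label_2, m=2.0, p=1.0):
    """Equation 1: synthesize only if the two labels (ages) differ by at most
    the preset threshold m; optionally gate the decision by probability p."""
    if abs(label_1 - label_2) > m:
        return False
    return random.random() < p

print(should_synthesize(9.0, 11.0))   # True: ages 9 and 11 fall within m = 2
print(should_synthesize(1.0, 19.0))   # False: ages 1 and 19 are too far apart
```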
When it is determined, based on Equation 1, that the feature vectors are to be synthesized, the synthesis module 230 may synthesize the feature vectors and their corresponding labels based on Equations 2 and 3.
[Equation 2]
$F_3 = \beta F_1 + (1 - \beta) F_2$
[Equation 3]
$L_3 = \beta L_1 + (1 - \beta) L_2$
Here, F1, F2, and F3 are the first, second, and third feature vectors corresponding to the first candidate bone image, the second candidate bone image, and the bone image for the target age, respectively; L1, L2, and L3 are the first label (age) corresponding to the first feature vector, the second label (age) corresponding to the second feature vector, and the third label (age) corresponding to the third feature vector, respectively; and β is an arbitrary variable indicating the degree of synthesis, for example a variable following a beta distribution.
Based on Equations 2 and 3, the third feature vector and its corresponding third label (age) can be determined.
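Reading Equations 2 and 3 as a convex combination, consistent with the linear interpolation mentioned below, a minimal sketch might look as follows; the Beta(0.5, 0.5) choice for β is an assumption.
```python
import numpy as np

def synthesize(f1, l1, f2, l2, beta=None):
    """Equations 2 and 3: mix the two feature vectors and their labels with
    the same coefficient beta."""
    if beta is None:
        beta = np.random.beta(0.5, 0.5)   # assumed Beta-distributed mixing coefficient
    f3 = beta * f1 + (1.0 - beta) * f2    # Equation 2: third feature vector
    l3 = beta * l1 + (1.0 - beta) * l2    # Equation 3: third label (age)
    return f3, l3

f3, l3 = synthesize(np.zeros(128), 3.0, np.ones(128), 5.0, beta=0.5)
print(l3)   # 4.0, the age midway between the two candidate ages
```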
A person skilled in the art will understand that the synthesis performed by the synthesis module 230 may use any data synthesis scheme, including linear interpolation or a fully connected layer.
The bone image generation and transmission module 240 may generate a bone image corresponding to the target age based on the third feature vector generated by the synthesis module 230. More specifically, the bone image generation and transmission module 240 may generate the target bone image using a second reading model trained in advance to take as input feature information corresponding to a desired bone image and to generate the bone image corresponding to that feature information. For example, the second reading model may include an artificial neural network model such as an autoencoder or a GAN (generative adversarial network) (for example, CycleGAN or BigBiGAN), on the basis of which a bone image corresponding to the target age can be generated. The bone image generation and transmission module 240 may input the third feature vector (for example, a feature vector generated by synthesizing feature vectors extracted from candidate bone images) into the second reading model to generate the target bone image (the bone image corresponding to the target age).
Additionally, the artificial neural network included in the second reading model may be trained in advance to generate the target bone image corresponding to a feature vector by taking as input an initial bone image together with the feature vector corresponding to the desired bone image. The initial bone image may be the desired bone image itself, but is not limited to this, and any bone image may be provided.
The bone image generation and transmission module 240 may input the third feature vector, together with the first candidate bone image or the second candidate bone image on which the third feature vector was based, into the second reading model to generate the target bone image corresponding to the third feature vector. A person skilled in the art will understand that a second reading model trained to generate the target bone image from an initial bone image together with the feature vector can produce a target bone image of better quality than one using the feature vector alone.
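The sketch below illustrates such a second model as a simple feed-forward decoder that accepts the third feature vector and, optionally, an initial bone image. The disclosure names autoencoders and GANs without fixing an architecture, so every layer choice here is an assumption.
```python
import torch
import torch.nn as nn

class BoneImageDecoder(nn.Module):
    """Toy stand-in for the second reading model: maps a feature vector,
    optionally conditioned on an initial bone image, to an output image."""
    def __init__(self, feature_dim=128, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Linear(feature_dim, 32 * (img_size // 4) ** 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2, 1, 1)   # fuses decoded image with the optional initial image

    def forward(self, feature_vector, initial_image=None):
        h = self.fc(feature_vector)
        h = h.view(-1, 32, self.img_size // 4, self.img_size // 4)
        img = self.deconv(h)
        if initial_image is not None:
            img = torch.sigmoid(self.fuse(torch.cat([img, initial_image], dim=1)))
        return img

decoder = BoneImageDecoder()
f3 = torch.randn(1, 128)                          # synthesized third feature vector
target = decoder(f3)                              # from the feature vector alone
target = decoder(f3, torch.randn(1, 1, 64, 64))   # conditioned on an initial candidate image
```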
A person skilled in the art can readily understand that the bone image generation and transmission module 240 is not limited to the example given and may be implemented by any scheme that generates a bone image based on a feature vector.
The bone image generation and transmission module 240 may store the generated bone image in a database or provide it to an external entity. When a bone image is provided to an external entity, the bone image generation and transmission module 240 may provide the generated bone image using a display device or the like, or to the external entity through the provided communication unit. Here, the external entity includes the user of the computing device 100, an administrator, or the medical professional in charge, but should be understood to include any party that needs the bone image for the target age. For example, the external entity may be an external AI device including separate AI (artificial intelligence) hardware and/or software modules that make use of the bone image. Further, "external" in "external entity" is not intended to exclude embodiments in which the AI hardware and/or software modules using the bone image are integrated into the computing device 100; rather, it is used to indicate that the bone image produced by the hardware and/or software modules performing the method of the present invention can serve as input data for other methods. That is, the external entity may be the computing device 100 itself.
Although the components shown in Fig. 2 are illustrated, for convenience of description, as realized in a single computing device, it will be understood that multiple computing devices 100 performing the method of the present invention may be configured to interwork with one another.
Fig. 3 is a flowchart illustrating a bone image generation method according to one embodiment.
Referring to Fig. 3, in step 310 the computing device may receive a target bone age through the communication unit.
In step 320, the computing device may acquire a first candidate bone image and a second candidate bone image for generating a bone image corresponding to the target bone age. If the target bone age lies between the bone age corresponding to the first candidate bone image and the bone age corresponding to the second candidate bone image, the computing device may determine whether the gap between the two bone ages is smaller than a preset interval. If the gap is smaller than or equal to the preset interval, the computing device may proceed to step 330. If the gap is larger than the preset interval, the computing device may re-acquire at least one of the first candidate bone image and the second candidate bone image so that the gap becomes smaller than or equal to the preset interval. As described above, this improves the accuracy of the result by using bone images of ages close to the target bone age when generating an image corresponding to it; a sketch of this selection logic follows.
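A minimal sketch of this candidate-selection logic, assuming the stored reference images are indexed by age and the preset interval is 2 years (an illustrative value):
```python
def select_candidates(target_age, reference_ages, max_gap=2.0):
    """Step 320: pick one reference age at or below the target and one at or
    above it, so the target lies between them and their gap stays within
    the preset interval max_gap."""
    lower = [a for a in reference_ages if a <= target_age]
    upper = [a for a in reference_ages if a >= target_age]
    if not lower or not upper:
        raise ValueError("target age lies outside the stored reference range")
    age_1, age_2 = max(lower), min(upper)
    if age_2 - age_1 > max_gap:
        raise ValueError("no candidate pair within the preset interval; re-acquire")
    return age_1, age_2

print(select_candidates(4.0, [1, 2, 3, 5, 7]))   # (3, 5)
```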
In step 330, the computing device may extract, based on the pre-trained first reading model, the first feature vector and the second feature vector from the first candidate bone image and the second candidate bone image, respectively. As described above, the pre-trained first reading model may be an artificial neural network trained to extract a feature vector from an input bone image and to read the age of the input bone image based on it.
The first reading model may have been trained to output the feature vector of an input bone image based on processed training bone images. A person skilled in the art can easily understand how a first reading model is trained to read the age of a bone image using bone images labeled with their corresponding ages as training data. To improve reading accuracy, the first reading model used in the present invention may be trained using processed training bone images. Specifically, the processed training bone images may include, in addition to the existing training bone images, a third training bone image generated by synthesizing a first training bone image and a second training bone image obtained from the training bone images. The third training bone image may correspond to a third bone age lying between the first bone age corresponding to the first training bone image and the second bone age corresponding to the second training bone image. Compared with existing training data, the processed training bone images can cover, without gaps, the elements corresponding to each individual age. The present invention thus has the notable effect of providing more accurate reading results through a reading model trained on processed training bone images.
In step 340, the computing device may synthesize the first feature vector and the second feature vector to generate a third feature vector corresponding to the target bone age. The computing device may generate the third feature vector by interpolating the first feature vector and the second feature vector; as described above, the way the third feature vector is generated is not limited to this example.
In step 350, the computing device may generate a target bone image corresponding to the target bone age based on the third feature vector.
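Putting steps 310 to 350 together, and reusing the helpers sketched above, one possible end-to-end flow is shown below. Choosing β so that the interpolated label lands exactly on the target age is one natural reading of the scheme, not something the disclosure mandates.
```python
def generate_target_bone_image(target_age, references, reader, decoder, m=2.0):
    """End-to-end sketch of steps 310-350. `references` maps age to image
    tensor; `reader` and `decoder` are the models sketched earlier."""
    age_1, age_2 = select_candidates(target_age, sorted(references), max_gap=m)  # step 320
    img_1, img_2 = references[age_1], references[age_2]
    _, f1 = reader(img_1)                                    # step 330: first feature vector
    _, f2 = reader(img_2)                                    #           second feature vector
    # Pick beta so that Equation 3 yields exactly the target age as the label.
    beta = 1.0 if age_1 == age_2 else (age_2 - target_age) / (age_2 - age_1)
    f3 = beta * f1 + (1.0 - beta) * f2                       # step 340: third feature vector
    return decoder(f3, img_1)                                # step 350: target bone image
```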
Fig. 4 is a diagram illustrating an example in which the bone image generation method according to one embodiment is performed.
A system that reads bone age from images can provide the reader with reference bone images corresponding to individual ages, and the reader can read the age of an input bone image by comparing it with the reference bone images. If, among the reference bone images stored in the database, the reference bone image 410 corresponding to age 3 is missing, as on the left, it may be difficult for the reader to read bone ages in that range. Moreover, in infants, whose development is very rapid, differences in bone development can be large even over a few months, so even with a reference bone image for age 1 and one for age 2, predicting the bone age of an infant between ages 1 and 2 can be very difficult.
To generate the missing age-3 reference bone image, the computing device 420 may receive a first candidate bone image 431 corresponding to age 2 and a second candidate bone image 432 and, based on these, generate a bone image 440 corresponding to age 3.
Beyond the approach shown, the computing device 420 may also generate a bone image corresponding to the target age based on feature vectors stored in advance in a database with age-based labels. In this case, in generating the bone image corresponding to the target age, the computing device 420 may generate the bone image from the target bone age information alone, without receiving a bone image as input.
Fig. 5 is a diagram illustrating the distribution of feature vectors according to age.
Referring to Fig. 5, graph 510 may be the distribution of feature vectors for male bone images, and graph 520 the distribution for female bone images. Each point 511, 512 in graphs 510 and 520 may correspond to the feature vector of an individual bone image, and the color of a point 511, 512 may indicate the corresponding age. Graphs 510 and 520 show that points 511, 512 corresponding to similar ages are distributed in neighboring regions. Given this distribution, it can be understood that a feature vector computed by synthesizing the feature vectors of adjacent ages can exhibit the characteristics of the target age.
Fig. 6 is a diagram illustrating an example of an interface to which the bone image generation method according to one embodiment is applied.
Referring to Fig. 6, reference bone images 621, 622, and 623 of ages similar to the current bone image 620 may be provided through an interface 610. The reference bone images 621, 622, and 623 corresponding to each age may be stored in advance in a database. If, during reading, a reference bone image between ages 10 and 11 is needed, the reader can request the generation of a reference bone image for that age through user input on a graphic object 630. When user input is made on the graphic object 630, a graphic object 640 for entering the target age may be displayed on the screen. The computing device may then generate a reference bone image 650 corresponding to the target age and display it on the screen.
도 7은 본 발명에 따른 의료 영상 생성 방법을 수행하는 컴퓨팅 장치의 하드웨어 또는 소프트웨어 구성요소를 도시한 예시적 블록도이다.7 is an exemplary block diagram illustrating hardware or software components of a computing device performing a method for generating a medical image according to the present invention.
도 7에 도시된 개별 모듈들은, 예컨대, 컴퓨팅 장치(100)에 포함된 통신부(110)나 프로세서(120), 또는 상기 통신부(110) 및 프로세서(120)의 연동에 의하여 구현될 수 있음은 통상의 기술자가 이해할 수 있을 것이다.The individual modules shown in FIG. 7 may be implemented by, for example, the communication unit 110 or the processor 120 included in the computing device 100 or the communication unit 110 and the processor 120 interworking. technicians will be able to understand.
도 7을 참조하여 본 발명에 따른 방법 및 장치의 구성을 간략히 개관하면, 컴퓨팅 장치(100)는 그 구성요소로서 타깃 등급 정보 수신 모듈(710)을 포함할 수 있다. 타깃 등급 정보 수신 모듈(710)은 컴퓨팅 장치에 포함되는 입력 장치, 연동된 사용자 단말로부터 타깃 등급 정보를 수신할 수 있다. 본 명세서에서 언급되는 등급이란, 임의의 조건에 따라 의료 영상에 대해 수치화된 등급을 의미할 수 있다. 예를 들어, 안저 영상의 경우 당뇨성망막병증의 진행 정도에 따라 정상 상태에 대응되는 0 등급에서 진행 정도가 최대 수준인 4 등급으로 분류될 수 있다. 이외에도, 골 영상은 앞서 설명된 바와 같이 골 연령에 따라 등급이 분류될 수 있으며, 추가적으로, 골 영상에 존재하는 병변의 악성도에 따라 개별 골 영상의 등급이 결정될 수 있다. 이와 같이 본 명세서에서 언급되는 등급은 소정의 조건(예를 들어, 연령, 병변의 악성도, 소정 질환의 진행 정도 등)에 기초하여 개별 의료 영상에 대해 결정되는 수치화된 정보를 의미할 수 있음은 통상의 기술자가 이해할 것이다. 또한, 타깃 등급 정보는 기 존재하는 의료 영상을 통해 새롭게 생성하고자 하는 의료 영상의 등급에 대한 정보를 의미할 수 있다.A brief overview of the configuration of the method and apparatus according to the present invention with reference to FIG. 7 , the computing device 100 may include a target rating information receiving module 710 as a component thereof. The target rating information receiving module 710 may receive target rating information from an input device included in the computing device and an interlocked user terminal. The grade referred to in this specification may mean a numerical grade for a medical image according to any condition. For example, the fundus image may be classified into a grade 0 corresponding to a normal state to a grade 4 having a maximum degree of progression according to the degree of progression of diabetic retinopathy. In addition, the bone image may be classified according to bone age as described above, and additionally, the grade of the individual bone image may be determined according to the malignancy of the lesion present in the bone image. As such, the grade referred to in this specification may refer to quantified information determined for an individual medical image based on a predetermined condition (eg, age, malignancy of a lesion, degree of progression of a predetermined disease, etc.) Those of ordinary skill in the art will understand. Also, the target grade information may refer to information on a grade of a medical image to be newly created through an existing medical image.
타깃 등급 정보 수신 모듈(710)을 통해 획득한 타깃 등급 정보에 기초하여, 특징 벡터 획득 모듈(720)은 타깃 등급 정보에 대응되는 의료 영상을 생성하기 위한 특징 벡터를 획득할 수 있다.Based on the target grade information acquired through the target grade information receiving module 710 , the feature vector acquisition module 720 may acquire a feature vector for generating a medical image corresponding to the target grade information.
일 실시예에 따르면, 특징 벡터 획득 모듈(720)은 입력 의료 영상에서 특징 벡터를 추출하고, 이에 기반하여 의료 영상의 등급을 판독하도록 미리 학습된 제1' 판독 모델을 포함할 수 있다. 제1' 판독 모델은 앞선 도 2의 특징 벡터 획득 모듈(720)과 동일한 종류의 인공 신경망에 기초하여 구현될 수 있음은 통상의 기술자가 이해할 것이다.According to an embodiment, the feature vector acquisition module 720 may include a pre-trained first 'reading model to extract a feature vector from an input medical image and read a grade of the medical image based thereon. Those skilled in the art will understand that the first 'reading model may be implemented based on the same kind of artificial neural network as the feature vector acquisition module 720 of FIG. 2 above.
일 실시예에 따르면, 특징 벡터 획득 모듈(720)에 포함된 제1' 판독 모델은, 대응되는 등급이 레이블링된 학습 의료 영상에 기초하여, 입력 의료 영상에 대한 특징 벡터를 추출하고, 추출한 특징 벡터에 기초하여 입력 의료 영상에 대응되는 등급을 판독하도록 미리 학습될 수 있다.According to an embodiment, the first 'reading model included in the feature vector acquisition module 720 extracts a feature vector for the input medical image based on the training medical image labeled with the corresponding class, and the extracted feature vector It may be learned in advance to read a grade corresponding to the input medical image based on the .
또한, 제1' 판독 모델은 기 존재하는 학습 의료 영상 사이의 합성에 기초하여 생성된 합성 학습 의료 영상에 기초하여 추가적인 학습이 진행될 수 있다. 예를 들어, 기 존재하는 학습 의료 영상에 당뇨성망막병증의 진행 정도가 1등급, 3등급에 대응되는 안저 영상이 포함된 경우, 제1' 판독 모델은 기 존재하는 1등급, 3등급에 대응되는 안저 영상과 더불어, 1등급, 3등급에 대응되는 안저 영상의 합성에 기초하여 생성된 2등급에 대응되는 안저 영상에 기초하여 학습이 진행될 수 있다. 기 존재하는 학습 의료 영상을 이용한 합성 학습 의료 영상의 생성은 이하 설명되는 합성 모듈(730) 및 의료 영상 생성 및 전송 모듈(740)의 동작에 기초하여 수행될 수 있다. 또한, 합성 학습 의료 영상이 생성되는 방식은 예시된 바에 한정되지 않고, 기 존재하는 학습 의료 영상을 토대로 보다 다양한 등급의 합성 학습 의료 영상이 생성될 수 있다. 예를 들어, 1등급 및 3등급에 대응되는 학습 안저 영상을 통해, 1.6등급, 2.2등급, 2.5등급 등에 대응되는 합성 학습 안저 영상이 생성될 수 있다.In addition, the first 'reading model may be additionally learned based on the synthesized training medical image generated based on the synthesis between the existing training medical images. For example, if the existing learning medical image includes fundus images corresponding to grades 1 and 3 of the progression of diabetic retinopathy, the first reading model corresponds to the existing grades 1 and 3 In addition to the fundus image to be used, learning may be performed based on the fundus image corresponding to the second grade generated based on the synthesis of the fundus images corresponding to the first and third grades. The generation of the synthetic learning medical image using the existing learning medical image may be performed based on the operations of the synthesizing module 730 and the medical image generating and transmitting module 740 , which will be described below. In addition, the method of generating the synthetic learning medical image is not limited to the example shown, and more various grades of the synthetic learning medical image may be generated based on the existing medical learning image. For example, synthetic learning fundus images corresponding to grades 1.6, 2.2, 2.5, and the like may be generated through learning fundus images corresponding to grades 1 and 3.
특징 벡터 획득 모듈(720)은 제1' 판독 모델의 중간 레이어의 출력을 특징 벡터로 획득하는 동작을 통해 입력 의료 영상에 대한 특징 벡터를 획득할 수 있다.The feature vector acquisition module 720 may acquire a feature vector for the input medical image through an operation of acquiring the output of the intermediate layer of the first 'reading model as a feature vector.
특징 벡터 획득 모듈(720)은 타깃 등급에 상응하는 의료 영상을 생성하기 위한 제1 후보 의료 영상 및 제2 후보 의료 영상을 획득하고, 제1' 판독 모델을 통해 제1 후보 의료 영상 및 제2 후보 의료 영상 각각에 대응되는 제1' 특징 벡터 및 제2' 특징 벡터를 추출할 수 있다. 예를 들어, 안저 영상에 대해 타깃 등급 정보가 당뇨성망막병증의 진행 정도인 2 등급으로 입력된 경우, 특징 벡터 획득 모듈(720)은 정상 상태에 해당하는 0 등급의 제1 후보 의료 영상(제1 후보 안저 영상)과 당뇨성망막병증이 일정 정도 진행된 3 등급의 제2 의료 영상(제2 후보 안저 영상)을 획득하고, 각각의 후보 의료 영상에 대응하는 제1' 특징 벡터 및 제2' 특징 벡터를 추출할 수 있다. 후보 안저 영상의 획득하는 방식은 제시된 예시적인 방식에 한정되는 것은 아니고, 1 등급의 제1 후보 안저 영상과 4 등급의 제2 후보 안저 영상이 2 등급인 타깃 등급에 대응되는 안저 영상을 생성하기 위하여 획득될 수 있음은 통상의 기술자가 이해할 것이다.The feature vector acquisition module 720 acquires a first candidate medical image and a second candidate medical image for generating a medical image corresponding to the target class, and uses the first 'reading model to obtain the first candidate medical image and the second candidate. A first' feature vector and a second' feature vector corresponding to each of the medical images may be extracted. For example, when target grade information is input as grade 2, which is the degree of progression of diabetic retinopathy with respect to the fundus image, the feature vector acquisition module 720 generates a first candidate medical image of grade 0 corresponding to the normal state ( 1 candidate fundus image) and a grade 3 second medical image (second candidate fundus image) in which diabetic retinopathy has progressed to a certain extent, and a first 'feature vector and a second' feature corresponding to each candidate medical image vector can be extracted. The method of acquiring the candidate fundus image is not limited to the presented exemplary method, and the first candidate fundus image of the first grade and the second candidate fundus image of the fourth grade are used to generate a fundus image corresponding to the target grade of the second grade. It will be understood by those skilled in the art that it can be obtained.
According to another embodiment, the feature vector acquisition module 720 may obtain the first' feature vector and the second' feature vector corresponding to the target grade information from feature vectors stored in a database. In some embodiments, a feature vector corresponding to each grade may be stored in a database, and when the target grade is 2, the feature vector acquisition module 720 may retrieve a feature vector corresponding to grade 0 and a feature vector corresponding to grade 3 from the database.
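A minimal sketch of such a per-grade feature-vector database follows; the dictionary layout, the 16-dimensional vectors, and the grade range are assumptions, with random vectors standing in for features precomputed by the first' reading model.

```python
import numpy as np

# Hypothetical database: one stored feature vector per grade.
feature_db = {grade: np.random.randn(16) for grade in range(5)}

def lookup(db, low_grade, high_grade):
    """Return the stored vectors for the grades bracketing the target."""
    return db[low_grade], db[high_grade]

f1_vec, f2_vec = lookup(feature_db, 0, 3)  # candidates for target grade 2
```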
The synthesis module 730 may synthesize the first' feature vector and the second' feature vector obtained through the feature vector acquisition module 720 to generate a third' feature vector corresponding to the target grade information. Specifically, the synthesis module 730 may first determine, based on a predetermined condition, whether to synthesize the first' feature vector and the second' feature vector. For example, the synthesis module 730 may determine whether to synthesize the two feature vectors based on Equation 4.
[Equation 4]

$|L_{1'} - L_{2'}| \le m$
Here, $L_{1'}$ denotes the first' label (grade) corresponding to the first' feature vector, $L_{2'}$ denotes the second' label (grade) corresponding to the second' feature vector, and $m$ denotes a preset threshold.
The threshold $m$ may be set in advance to ensure the accuracy of the synthesis result. Specifically, when generating a feature vector corresponding to diabetic retinopathy grade 2, using a feature vector corresponding to grade 1 and a feature vector corresponding to grade 3 may yield a more accurate result than using a feature vector corresponding to grade 0 (a fundus image in which diabetic retinopathy has not occurred) and a feature vector corresponding to grade 4. Accordingly, the synthesis module 730 may determine whether to synthesize the feature vectors according to the predetermined threshold.
According to another embodiment, when the interval between the labels of the feature vectors is less than or equal to the threshold, the synthesis module 730 may decide with a predetermined probability whether to proceed with the synthesis.
When it is determined, based on Equation 4, that the feature vectors are to be synthesized, the synthesis module 730 may synthesize the feature vectors and their corresponding labels based on Equations 5 and 6.
[Equation 5]

$F_{3'} = \beta \, F_{1'} + (1 - \beta) \, F_{2'}$
[Equation 6]

$L_{3'} = \beta \, L_{1'} + (1 - \beta) \, L_{2'}$
Here, $F_{1'}$, $F_{2'}$, and $F_{3'}$ denote the first', second', and third' feature vectors, respectively; $L_{1'}$, $L_{2'}$, and $L_{3'}$ denote the first' label corresponding to the first' feature vector, the second' label corresponding to the second' feature vector, and the third' label corresponding to the third' feature vector, respectively; and $\beta$ is a variable indicating the degree of synthesis, which may, for example, follow a beta distribution.
Based on Equations 5 and 6, the third' feature vector and its corresponding third' label (grade) may be determined.
Those skilled in the art will understand that the synthesis performed by the synthesis module 730 may use any data synthesis method, including linear interpolation or a fully connected layer.
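Purely as an illustrative sketch of Equations 4 to 6, the following function applies the label-gap check and then linearly mixes the feature vectors and labels; the beta-distribution parameters (0.5, 0.5) and the vector dimension are assumptions, not values given in the disclosure.

```python
import numpy as np

def synthesize(f1, f2, l1, l2, m=2.0, rng=None):
    """Mix two feature vectors and their grade labels (Equations 4-6).

    Returns (f3, l3), or None when Equation 4 rejects the pair.
    """
    rng = rng or np.random.default_rng()
    if abs(l1 - l2) > m:                 # Equation 4: |L1' - L2'| <= m
        return None
    beta = rng.beta(0.5, 0.5)            # degree of synthesis (assumed params)
    f3 = beta * f1 + (1.0 - beta) * f2   # Equation 5
    l3 = beta * l1 + (1.0 - beta) * l2   # Equation 6
    return f3, l3

# Usage: grade-1 and grade-3 vectors yield an intermediate-grade vector.
f1, f2 = np.random.randn(16), np.random.randn(16)
mixed = synthesize(f1, f2, l1=1.0, l2=3.0)
```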
The medical image generation and transmission module 740 may generate a medical image corresponding to the target grade based on the third' feature vector generated by the synthesis module 730. Specifically, the medical image generation and transmission module 740 may use a second' reading model pre-trained to take, as input, feature information corresponding to a target medical image to be generated and to generate the target medical image corresponding to that feature information. For example, the second' reading model may include an artificial neural network model such as an autoencoder or a GAN (e.g., CycleGAN, BigBiGAN), based on which a medical image corresponding to the target grade may be generated. The medical image generation and transmission module 740 may input the third' feature vector (e.g., a feature vector generated by synthesizing feature vectors obtained from the candidate medical images) into the second' reading model to generate a medical image corresponding to the third' feature vector, that is, a medical image corresponding to the target grade.
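As a minimal sketch of how the second' reading model might map the third' feature vector back to an image, the following transposed-convolution decoder stands in for the autoencoder decoder or GAN generator mentioned above; its architecture and output size are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical decoder: feature vector in, synthesized image out.
class Decoder(nn.Module):
    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (64, 7, 7)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, feature_vector):
        return self.net(feature_vector)  # image for the target grade

decoder = Decoder()
f3 = torch.randn(1, 16)                  # third' feature vector
target_image = decoder(f3)               # shape (1, 3, 28, 28)
```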
Additionally, the artificial neural network included in the second' reading model may be pre-trained to generate the target medical image corresponding to a feature vector by taking, as input, an initial medical image together with the feature vector corresponding to the target medical image to be generated. The target medical image may be provided as the initial medical image, but the initial medical image is not limited thereto; an arbitrary medical image may be provided.
Using the second' reading model trained with an initial medical image as an additional input, the medical image generation and transmission module 740 may generate a target medical image corresponding to the third' feature vector from the third' feature vector and from the first candidate medical image or the second candidate medical image on which the generation of the third' feature vector was based. A second' reading model trained to generate a target medical image from an initial medical image together with the third' feature vector can produce a target medical image of better quality than one that uses the feature vector alone.
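A sketch of this conditional variant, in which the generator receives an initial medical image alongside the third' feature vector, might look as follows; the fusion scheme and all dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical conditional generator: fuses an encoded initial image
# with a projected feature vector before producing the output image.
class ConditionalGenerator(nn.Module):
    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(28),
        )
        self.feat_proj = nn.Linear(feat_dim, 28 * 28)
        self.fuse = nn.Conv2d(8 + 1, 3, 3, padding=1)

    def forward(self, feature_vector, initial_image):
        img = self.img_enc(initial_image)                       # (B, 8, 28, 28)
        f = self.feat_proj(feature_vector).view(-1, 1, 28, 28)  # (B, 1, 28, 28)
        return torch.sigmoid(self.fuse(torch.cat([img, f], dim=1)))

gen = ConditionalGenerator()
out = gen(torch.randn(1, 16), torch.randn(1, 3, 64, 64))  # (1, 3, 28, 28)
```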
Those skilled in the art will readily understand that the medical image generation and transmission module 740 is not limited to the presented example and may be implemented by any method that generates a target medical image based on the synthesis of feature vectors extracted from candidate medical images.
The medical image generation and transmission module 740 may store the generated medical image in a database or provide it to an external entity. When a medical image is provided to an external entity, the medical image generation and transmission module 740 may provide the generated medical image using a display device or the like, or through an included communication unit. Here, the external entity may have the same meaning as described above with reference to FIG. 2.
Although the components shown in FIG. 7 are illustrated as being realized in one computing device for convenience of description, it will be understood that a plurality of computing devices 100 performing the method of the present invention may be configured to interwork with one another.
FIG. 8 is a flowchart illustrating a method of generating a medical image according to an embodiment.
Referring to FIG. 8, the computing device may receive target grade information in step 810. Those skilled in the art will understand that the target grade information corresponds to information on the target grade of the medical image to be newly generated and may be received from an external entity through a communication unit or entered through an included input device.
In step 820, the computing device may acquire a first candidate medical image and a second candidate medical image for generating a medical image corresponding to the target grade information. When the target grade lies between the grade corresponding to the first candidate medical image and the grade corresponding to the second candidate medical image, the computing device may determine whether the interval between the grade of the first candidate medical image and the grade of the second candidate medical image is smaller than a preset interval. When the interval between the two grades is less than or equal to the preset interval, the computing device may proceed to step 830. When the interval is greater than the preset interval, the computing device may re-acquire the first candidate medical image and the second candidate medical image so that the interval becomes less than or equal to the preset interval. This is to improve the accuracy of the result by using medical images whose grades are close to the target grade when generating a medical image corresponding to the target grade.
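As an illustrative sketch of step 820, the following function picks, from a hypothetical pool of grade-labeled images, the closest pair whose grades bracket the target and differ by no more than a preset interval; the pool structure and the max_gap value are assumptions.

```python
def select_candidates(pool, target, max_gap=2.0):
    """Pick two images whose grades bracket `target` within `max_gap`.

    pool: dict mapping grade -> image; returns the tightest valid pair.
    """
    best = None
    grades = sorted(pool)
    for low in grades:
        for high in grades:
            if low <= target <= high and (high - low) <= max_gap:
                if best is None or (high - low) < (best[1] - best[0]):
                    best = (low, high)
    if best is None:
        raise ValueError("no candidate pair within the preset interval")
    return pool[best[0]], pool[best[1]]

pool = {0.0: "img0", 1.0: "img1", 3.0: "img3"}
first, second = select_candidates(pool, target=2.0)  # ("img1", "img3")
```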
In step 830, the computing device may obtain a first' feature vector and a second' feature vector for the first candidate medical image and the second candidate medical image, respectively, based on the pre-trained first' reading model. As described above, the first' reading model may be an artificial neural network trained to extract a feature vector from an input medical image and to read the grade of the input medical image based on the feature vector.
As described above, the first' reading model may be trained, based on processed training medical images (the existing training images together with synthetic training images generated by synthesizing them), to extract a feature vector from an input medical image and to compute the grade of the input medical image based on the extracted feature vector. Those skilled in the art will readily understand how the first' reading model is trained to read the grade of a medical image using, as training data, medical images labeled with their corresponding grades. The first' reading model used in the present invention may be trained using processed training medical images to improve reading accuracy. Specifically, the processed training medical images may further include, in addition to the existing training medical images, a third training medical image generated by synthesizing a first training medical image and a second training medical image obtained from the training medical images. The third training medical image may correspond to a third grade lying between a first grade corresponding to the first training medical image and a second grade corresponding to the second training medical image. Compared with the existing training data, the processed training data may include elements corresponding to every individual grade without omission. The present invention may thus provide more accurate reading results through a reading model trained with the processed training data.
In step 840, the computing device may generate a third' feature vector corresponding to the target grade by synthesizing the first' feature vector and the second' feature vector. The computing device may generate the third' feature vector by interpolating the first' feature vector and the second' feature vector; the manner of generating the third' feature vector is as described above with respect to Equations 4 to 6.
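Where the mixed label must equal the target grade exactly, as in claims 4 and 9, the mixing weight can be solved in closed form instead of sampled; a sketch, assuming distinct labels with the target lying between them:

```python
def target_weight(l1, l2, target):
    """Solve beta so that beta*l1 + (1 - beta)*l2 == target (requires l1 != l2)."""
    beta = (target - l2) / (l1 - l2)
    if not 0.0 <= beta <= 1.0:
        raise ValueError("target grade must lie between the two labels")
    return beta

beta = target_weight(l1=1.0, l2=3.0, target=2.0)  # beta == 0.5
# f3 = beta * f1 + (1 - beta) * f2 then corresponds to grade 2.
```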
In step 850, the computing device may generate a target medical image corresponding to the target grade based on the third' feature vector.
Based on the description of the above embodiments, those skilled in the art will clearly understand that the methods and/or processes of the present invention, and their steps, may be realized in hardware, software, or any combination of hardware and software suitable for a particular application. The hardware may include a general-purpose computer and/or a dedicated computing device, or a specific computing device or a particular aspect or component of a specific computing device. The processes may be realized by one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors, or other programmable devices having internal and/or external memory. In addition, or as an alternative, the processes may be implemented with an application-specific integrated circuit (ASIC), a programmable gate array, programmable array logic (PAL), or any other device or combination of devices that can be configured to process electronic signals. Furthermore, the subject matter of the technical solution of the present invention, or the parts thereof that contribute over the prior art, may be implemented in the form of program instructions executable through various computer components and recorded on a machine-readable recording medium. The machine-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the machine-readable recording medium may be specially designed and configured for the present invention, or may be known to and usable by those skilled in the computer software art. Examples of machine-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROM, DVD, and Blu-ray; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
Examples of program instructions include machine code and bytecode, as well as high-level language code that can be executed by a computer using an interpreter or the like. Such instructions may be created using a structured programming language such as C, an object-oriented programming language such as C++, or a high-level or low-level programming language (assembly languages, hardware description languages, and database programming languages and technologies), and may be stored and compiled or interpreted to run on any of the devices described above, on a heterogeneous combination of processors, processor architectures, or combinations of different hardware and software, or on any other machine capable of executing program instructions.
Accordingly, in one aspect of the present invention, the methods described above and their combinations, when performed by one or more computing devices, may be implemented as executable code that performs the respective steps. In another aspect, the methods may be implemented as systems that perform the steps, and they may be distributed across devices in various ways, or all of the functions may be integrated into one dedicated, stand-alone device or other hardware. In yet another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such sequential combinations and permutations are intended to fall within the scope of this disclosure.
For example, the hardware device may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa. The hardware device may include a processor, such as an MPU, CPU, GPU, or TPU, coupled with a memory such as ROM/RAM for storing program instructions and configured to execute the instructions stored in the memory, and may include a communication unit capable of exchanging signals with external devices. In addition, the hardware device may include a keyboard, a mouse, or other external input devices for receiving commands written by developers.
While the present invention has been described above with reference to specific details, such as specific components, limited embodiments, and drawings, these are provided only to assist a more general understanding of the invention; the invention is not limited to the above embodiments, and those of ordinary skill in the art to which the invention pertains may make various modifications and variations from this description.
Therefore, the spirit of the present invention should not be limited to the embodiments described above, and not only the claims set forth below but also everything modified equally or equivalently to those claims belongs to the scope of the spirit of the present invention.
Such equal or equivalent modifications will include, for example, logically equivalent methods capable of producing the same results as practicing the methods according to the present invention; the spirit and scope of the present invention should not be limited by the examples described above but should be understood in the broadest sense permitted by law.

Claims (13)

  1. A bone image generation method performed by a computing device to generate a bone image corresponding to a target age, the method comprising:
    acquiring a first candidate bone image and a second candidate bone image for generating a bone image corresponding to an input target bone age;
    extracting a first feature vector and a second feature vector for the first candidate bone image and the second candidate bone image, respectively, based on a pre-trained first reading model;
    generating a third feature vector corresponding to the target bone age by synthesizing the first feature vector and the second feature vector; and
    generating a target bone image corresponding to the target bone age by inputting the third feature vector into a pre-trained second reading model.
  2. The method of claim 1,
    wherein the first reading model is trained, based on training bone images labeled with corresponding ages, to extract a feature vector for an input bone image and to read a bone age corresponding to the input bone image based on the extracted feature vector,
    wherein the first reading model is additionally trained based on a synthetic training bone image in which two or more of the existing training bone images are synthesized, and
    wherein the synthetic training bone image corresponds to an age that does not exist in the training bone images.
  3. The method of claim 1,
    wherein the generating of the third feature vector comprises generating the third feature vector by interpolating the first feature vector and the second feature vector.
  4. The method of claim 3,
    wherein the generating of the third feature vector comprises:
    determining a first weight and a second weight such that a third label, generated as the sum of a first label corresponding to the bone age of the first candidate bone image weighted by the first weight and a second label corresponding to the bone age of the second candidate bone image weighted by the second weight, corresponds to the target bone age; and
    generating the third feature vector based on the sum of the first feature vector weighted by the first weight and the second feature vector weighted by the second weight.
  5. The method of claim 1,
    wherein, in the generating of the third feature vector, the first candidate bone image and the second candidate bone image are selected such that the interval between a first bone age of the first candidate bone image and a second bone age of the second candidate bone image is less than or equal to a preset interval.
  6. The method of claim 1,
    wherein the generating of the target bone image comprises generating the target bone image based on the third feature vector together with either the first candidate bone image or the second candidate bone image.
  7. A computer program, stored in a machine-readable non-transitory recording medium, comprising instructions implemented to cause a computing device to perform the method of claim 1.
  8. A medical image generation method performed by a computing device, the method comprising:
    obtaining a first feature vector and a second feature vector from a first candidate medical image and a second candidate medical image, respectively;
    generating a third feature vector corresponding to a target grade by synthesizing the obtained first feature vector and the second feature vector, wherein the target grade lies between a first grade corresponding to the first candidate medical image and a second grade corresponding to the second candidate medical image; and
    generating a target medical image corresponding to the target grade based on the third feature vector.
  9. The method of claim 8,
    wherein the generating of the third feature vector comprises:
    determining a first weight corresponding to a first label and a second weight corresponding to a second label such that a third label, generated based on a weighted sum of the first label corresponding to the grade of the first candidate medical image and the second label corresponding to the grade of the second candidate medical image, corresponds to the target grade; and
    generating the third feature vector based on the sum of the first feature vector weighted by the first weight and the second feature vector weighted by the second weight.
  10. The method of claim 8,
    wherein, in the generating of the third feature vector, the first candidate medical image and the second candidate medical image are selected such that the interval between the first grade and the second grade is less than or equal to a preset interval.
  11. The method of claim 8,
    wherein the generating of the target medical image comprises generating the target medical image based on the third feature vector together with either the first candidate medical image or the second candidate medical image.
  12. A computer program, stored in a machine-readable non-transitory recording medium, comprising instructions implemented to cause a computing device to perform the method of claim 8.
  13. A computing device for generating a medical image, the computing device comprising:
    a communication unit; and
    a processor,
    wherein the processor is configured to:
    generate a third feature vector corresponding to an input target grade by synthesizing a first feature vector and a second feature vector obtained respectively from a first candidate medical image and a second candidate medical image acquired for generating a medical image corresponding to the target grade; and
    generate a target medical image corresponding to the target grade based on the third feature vector.

