WO2022255674A1 - Explainable biometric image reading support device and method - Google Patents

Explainable biometric image reading support device and method

Info

Publication number
WO2022255674A1
WO2022255674A1 (PCT/KR2022/006536)
Authority
WO
WIPO (PCT)
Prior art keywords
image
attribute information
biometric image
style
machine learning
Prior art date
Application number
PCT/KR2022/006536
Other languages
French (fr)
Korean (ko)
Inventor
박상민
Original Assignee
자이메드 주식회사
Priority date
Filing date
Publication date
Application filed by 자이메드 주식회사 filed Critical 자이메드 주식회사
Publication of WO2022255674A1

Classifications

    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • A61B 5/00 Measuring for diagnostic purposes; identification of persons
    • G06F 18/00 Pattern recognition
    • G06F 3/0485 Scrolling or panning
    • G06F 3/04855 Interaction with scrollbars
    • G06N 20/00 Machine learning
    • G06N 3/04 Neural network architecture, e.g. interconnection topology
    • G16H 30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40 ICT specially adapted for processing medical images, e.g. editing
    • G16H 50/20 ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/50 ICT for simulation or modelling of medical disorders
    • G16H 50/70 ICT for mining of medical data, e.g. analysing previous cases of other patients
    • G06T 2207/20081 Training; learning
    • G06T 2207/30004 Biomedical image processing

Definitions

  • The present application relates to an explainable reading support method for biometric images in the medical field, and to an apparatus and system using the method.
  • Learning models such as convolutional neural networks, deep neural networks, recurrent neural networks, and deep belief networks are applied to the detection, classification, and feature learning of medical images.
  • A machine learning model is used to support image reading (finding/diagnosis) in order to predict a test subject's disease. More specifically, a medical image of a subject is acquired, attribute information is extracted from it based on a machine learning model and provided to a diagnostician, and a disease is predicted based on the attribute information. Here, the attribute information covers the various kinds of information contained in the medical image.
  • Differences in learning information may stem from a lack of training data input to the learning model, differences in imaging environments (e.g., health checkup center, private eye clinic, general hospital specializing in ophthalmology), differences in the subject group (e.g., only normal subjects, only abnormal subjects, or a mix of normal and abnormal subjects), differences in imaging equipment, and the like. These various factors can lead medical staff to mispredict the disease of the test subject.
  • Embodiments of the present invention are intended to provide a biometric image reading support method and apparatus capable of accurately and reliably reading a biometric image based on a machine learning model.
  • An embodiment of the present invention also provides a biometric image reading support method and apparatus capable of explaining the grounds for a reading of the biometric image.
  • An apparatus according to an embodiment includes a processor and a memory containing one or more instructions implemented to be executed by the processor. The processor extracts first attribute information from a first biometric image based on a first machine learning model; generates, based on a second machine learning model, a third biometric image having second attribute information comparable to the first attribute information by modifying the style of a second biometric image paired with the first biometric image; and displays the first biometric image having the first attribute information and the third biometric image having the second attribute information on the display unit.
  • The second machine learning model includes a first sub machine learning model and a second sub machine learning model; based on the first sub machine learning model, the style of the second biometric image is modified to generate an adversarial style image.
  • The style may be modified by mapping adversarial noise onto the second biometric image.
  • The adversarial noise may include values for adjusting the gradation levels of the R, G, and B pixels of the second biometric image, values for changing the color of the R, G, and B pixels of the second biometric image, and a value for adjusting the contrast ratio of the second biometric image.
  • The second sub machine learning model may generate the third biometric image by fusing the adversarial style image with the second biometric image.
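The three noise components enumerated above (gradation level, color change, and contrast ratio) can be pictured as a simple per-pixel transform. This is an illustrative sketch only, not the patent's implementation; the parameter names are hypothetical:

```python
def map_adversarial_noise(pixel, gradation=(0, 0, 0), color_shift=(0, 0, 0),
                          contrast=1.0):
    """Apply noise parameters to one (R, G, B) pixel, clamped to 0..255.

    `gradation` and `color_shift` are per-channel offsets; `contrast`
    scales each channel about mid-gray (128).
    """
    out = []
    for channel, grad, shift in zip(pixel, gradation, color_shift):
        value = (channel + grad + shift - 128) * contrast + 128
        out.append(max(0, min(255, round(value))))
    return tuple(out)
```

Mapping such a transform over every pixel of the targeted tissue regions of the second biometric image would yield one adversarial style image.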
  • The screen displayed on the display unit may include a biometric image screen for displaying the third biometric image and an image style adjustment screen for displaying a scroll bar for adjusting the style of the second biometric image.
  • FIG. 1 is a diagram exemplarily illustrating a process of generating attribute information of a biometric image by a biometric image reading device according to an embodiment of the present invention.
  • FIG. 2A is a diagram exemplarily illustrating a process of generating a plurality of adversarial style images by a biometric image reading device according to an embodiment of the present invention.
  • FIG. 2B is a diagram exemplarily illustrating a process of generating a single adversarial style image according to predicted values by a biometric image reading device according to an embodiment of the present invention.
  • FIG. 3 is a diagram exemplarily illustrating a process of generating a third biometric image to be compared with a first biometric image by a biometric image reading device according to an embodiment of the present invention.
  • FIG. 4 is a diagram exemplarily illustrating a second biometric image and an adversarial style image generated by a biometric image reading device according to an embodiment of the present invention.
  • FIG. 5 is a diagram schematically illustrating an exemplary configuration of an apparatus for supporting reading of a biometric image that performs a method of supporting reading of a biometric image according to an embodiment of the present invention.
  • FIG. 6 is a diagram schematically illustrating a biometric image reading support system that performs a method of supporting reading of a biometric image according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a screen displayed on a display unit of an apparatus for supporting reading of a biometric image according to an embodiment of the present invention.
  • FIG. 8 is a flowchart exemplarily illustrating a biometric image read support method according to an embodiment of the present invention.
  • The term "image" used in the detailed description and claims of the present invention can be defined as a digital copy or imitation of the shape or specific characteristics of a person or object; the image may be, but is not limited to, a JPEG image, PNG image, GIF image, TIFF image, or any other digital image format known in the art. Also, "image" may be used in the same sense as "photograph".
  • the "attribute” used in the detailed description of the present invention and claims may be defined as a group of one or more descriptive characteristics of a subject capable of distinguishing a disease (lesion), and the “attribute” may be expressed as a numerical characteristic. .
  • The apparatus, method, system, and device disclosed in the present invention may be applied to medical images of fundus and osteoarthritic joint structures or any other biological tissue images capable of supporting the diagnosis of disease states, but are not limited thereto; they may also be used with computed tomography (CT), magnetic resonance imaging (MRI), computed radiography, magnetic resonance angioscopy, optical coherence tomography, color flow Doppler, cystoscopy, diaphanography, echocardiography, fluorescein angiography, laparoscopy, magnetic resonance angiography, positron emission tomography, single photon emission computed tomography, X-ray angiography, nuclear medicine imaging, biomagnetic imaging, colposcopy, duplex Doppler, digital microscopy, endoscopy, laser surface scanning, magnetic resonance spectroscopy, radiographic imaging, thermal imaging, and radiofluoroscopy.
  • FIG. 1 is a diagram exemplarily illustrating a process of generating attribute information of a biometric image by a biometric image reading device according to an embodiment of the present invention.
  • The attribute information derived from the biometric image is reliable and can assist medical staff in accurately reading the biometric image.
  • Image attribute information can be generated automatically. That is, the biometric image reading support apparatus 100 according to the present invention finds attribute information in the first biometric image 1, which is first captured of the biological tissue of an object through a camera or the like, and this subsequently acquired attribute information may help in determining or explaining whether the first biometric image 1 of the target object is a genuinely reliable biometric image.
  • The biometric image reading support apparatus 100 extracts the first attribute information 5 of the first biometric image 1 from the captured first biometric image 1 using a first machine learning model 3, and stores the extracted first attribute information 5 of the first biometric image 1 in a system memory unit 113 or a storage device 115 to be described later.
  • The first machine learning model 3 may be loaded into the processor 111 and executed, or recorded on a computer-readable recording medium (not shown) and executed. The machine learning model may also be loaded into the memory unit 113 or the storage device 115 and run by the processor 111.
  • The biometric image reading support apparatus 100 may additionally store clinical information of an object (e.g., a patient) in the memory unit 113 or the storage device 115, which will be described later.
  • The processor 111 may extract the first attribute information 5 of the first biometric image 1 based on a machine learning model by utilizing the clinical information of the object stored in the memory or storage device.
  • Clinical information may include, but is not limited to, age, gender, medical history, medical examination information, test measurement values, exercise habits, eating habits, family history related to the above medical history, alcohol consumption and smoking status of the subject (patient).
  • The medical history information may include a neurological examination that a doctor may perform on a patient, and may refer to abnormal findings currently observed, as distinct from the medical history.
  • As the test measurement values, for example, measured values of intraocular pressure, blood pressure, blood glucose level, and the like may be considered.
  • The first attribute information 5 is information that can support an entity such as medical staff in predicting and diagnosing a disease.
  • For a fundus image, the first attribute information 5 may be information on an increased cup-to-disc (C/D) ratio, disc rim thinning, retinal nerve fiber layer defects, retinal hemorrhage, and the like; this information may be tissue position, thickness, shape, or pixel information, but is not limited thereto, and any information capable of identifying the attributes of each tissue may be included. Based on this tissue information, the device 100 may predict and diagnose glaucoma in the fundus image of the subject.
  • For a bone joint image, the first attribute information 5 may include joint space narrowing, osteophytes, sclerosis, bone end deformity, and the like.
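For orientation, one such fundus attribute, the cup-to-disc (C/D) ratio, can be derived from binary cup and disc segmentation masks. The patent does not specify how attribute values are computed, so the mask-based formulation below is only an assumed sketch:

```python
def vertical_extent(mask):
    """Count the rows of a binary mask (nested lists of 0/1) containing tissue."""
    return sum(1 for row in mask if any(row))

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical C/D ratio; larger values (commonly above ~0.6) are often
    treated as an increased-C/D-ratio finding."""
    disc = vertical_extent(disc_mask)
    if disc == 0:
        raise ValueError("empty disc mask")
    return vertical_extent(cup_mask) / disc
```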
  • The second biometric image 10 is an image paired with the first biometric image 1; any image that can serve as a counterpart to the first biometric image 1 and as a basis for comparison can be used. The style of the second biometric image 10 is modified based on the second machine learning model 30, and the third biometric image 50 can thereby be generated by the processor 111.
  • the second machine learning model 30 may include a first sub machine learning model 30a and a second sub machine learning model 30b.
  • The style of the second biometric image 10 is transformed by the processor 111 based on the first sub machine learning model 30a, and accordingly a plurality of adversarial style images 10a can be generated from the second biometric image 10.
  • The generated adversarial style images 10a may be stored in the system memory unit 113 or the storage device 115 to be described later.
  • the first sub machine learning model 30a may be input to the processor 111 and executed, or may be input to a computer readable recording medium (not shown) and executed.
  • The first sub machine learning model 30a may also be loaded into the memory unit 113 or the storage device 115 and run by the processor 111.
  • The adversarial style image 10a is an image having second attribute information comparable to the first attribute information 1a of the first biometric image 1, obtained by mapping noise onto the second biometric image 10; the method of producing the adversarial style image 10a will be described later.
  • the second attribute information is similar to the first attribute information described above, so a description thereof will be omitted.
  • The second sub machine learning model 30b can create a third biometric image 50 by selecting one of the plurality of adversarial style images 10a and fusing it with the input second biometric image 10. Accordingly, second attribute information comparable to the first attribute information 1a of the first biometric image 1 may be reflected in the third biometric image 50.
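The fusion step performed by the second sub machine learning model 30b is a learned generator; purely as an illustrative stand-in, it can be pictured as a pixel-wise blend of the selected adversarial style image with the second biometric image:

```python
def fuse(base, style, alpha=0.5):
    """Blend two equal-size images given as nested [row][col] tuples of
    channel values; `alpha` weights the adversarial style image.
    This linear blend is a stand-in for the learned generator 30b.
    """
    return [[tuple(round((1 - alpha) * b + alpha * s) for b, s in zip(bp, sp))
             for bp, sp in zip(brow, srow)]
            for brow, srow in zip(base, style)]
```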
  • FIG. 2A is a diagram exemplarily illustrating a process of generating a plurality of adversarial style images by a biometric image reading device according to an embodiment of the present invention, and FIG. 2B is a diagram exemplarily illustrating a process of generating a single adversarial style image according to predicted values by the biometric image reading device.
  • The processor 111 may generate adversarial style images 10a1, 10a2, and 10a3 by mapping adversarial noises 11a, 11b, and 11c onto the second biometric image 10.
  • The adversarial noises 11a, 11b, and 11c are generated in specific regions of the second biometric image 10, for example, a cup-to-disc ratio region, a disc rim thinning (DRT) region, and a retinal nerve fiber layer (RNFL) region.
  • Referring to FIG. 2B, the adversarial style image includes second attribute information 40 obtained by changing the attribute information of the second biometric image through mapping adversarial noise 11 onto the second biometric image 10.
  • The adversarial noise 11 controls the gradation levels of the R, G, and B pixels representing each labeled tissue (e.g., retina, nerve fiber layer, cup, disc, etc.) in the second biometric image 10, and may include any element that can change the attribute information of the biometric image.
  • The adversarial style image 10a including the second attribute information may be generated by mapping the adversarial noise once, or it may be generated by mapping the noise repeatedly so that the prediction value obtained for the second attribute information 40 based on the machine learning model converges to a set value.
  • For example, when the set value is 1 and the predicted value for the attribute information 15 of the second biometric image 10 obtained based on the machine learning model 13 is 0.58, after mapping the adversarial noise onto the second biometric image 10, mapping is repeated so that the predicted values (0.60, 0.66, 0.68, ...) of the adversarial style image 10a obtained based on the same machine learning model converge to the set value of 1.
  • Conversely, when the set value is 0, adversarial noise is repeatedly mapped onto the second biometric image so that the predicted values (0.56, 0.54, 0.52, ...) of the adversarial style image 10a converge to the set value of 0.
  • The adversarial noise can be mapped onto the labeled tissue in the second biometric image.
  • A set value of 1 means a biometric image that is almost certainly abnormal, and may correspond to a biometric image with findings from which diseases (e.g., glaucoma, osteoarthritis, etc.) can be predicted and diagnosed; a set value of 0 correspondingly means a biometric image close to normal.
  • The adversarial style image 10a may be generated repeatedly according to the number of set values and may be generated in various forms.
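The convergence procedure described above can be sketched as the loop below, where `predict` (the machine learning model's prediction) and `apply_noise` (one noise-mapping step) are hypothetical stand-ins for components the patent does not detail:

```python
def generate_adversarial_style(image, predict, apply_noise,
                               set_value=1.0, tolerance=0.02, max_steps=100):
    """Repeatedly map adversarial noise until the predicted value for the
    modified image converges to the set value (1 ~ abnormal, 0 ~ normal)."""
    for _ in range(max_steps):
        if abs(predict(image) - set_value) <= tolerance:
            break
        image = apply_noise(image, toward=set_value)
    return image
```

With a set value of 1 and an initial prediction of 0.58, such a loop would produce a rising progression like the 0.60, 0.66, 0.68, ... described above, stopping once the prediction is within the tolerance of the set value.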
  • An adversarial style image can also be generated for each of a plurality of unit set-value intervals, as illustrated in the figure.
  • The adversarial style image 10a, obtained by mapping the adversarial noise onto the second biometric image 10 based on the machine learning model, is fused again with the second biometric image 10 by the processor 111 based on the above-described second sub machine learning model 30b to create the third biometric image 50.
  • The generated third biometric image 50 is displayed on the display unit, provided to an external entity through a transmission module such as the network adapter 118 or the display adapter 119 to be described later, or provided to a remote computing device or other device linked to the computing device 110 through an Internet network.
  • Since the provided third biometric image 50 may reflect the second attribute information, i.e., be a biometric image close to normal or abnormal, the style of the third biometric image 50 can be changed freely. Therefore, when the entity reads the first biometric image 1, it is easy to compare it against the third biometric image, and the result of the reading can be explained because there is a comparison standard; accordingly, the reliability of reading the first biometric image 1 having the first attribute information 1a can be increased.
  • FIG. 3 exemplarily illustrates a process of generating a third biometric image to be compared with a first biometric image by a biometric image reading device according to an embodiment of the present invention, and FIG. 4 exemplarily shows a second biometric image and an adversarial style image generated by the biometric image reading device.
  • The second biometric image 10 is input to the style encoder (Style Encoder, 31a) of the first sub machine learning model 30a, and intermediate style data may be generated from the second biometric image 10 by a processor (not shown).
  • the generated style data may be image data.
  • The second biometric image 10 used to generate the style data may be the first biometric image 1 captured by the biometric image reading device 100 described in FIG. 1, or a fake image similar to the first biometric image 1.
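The patent does not disclose the internals of the style encoder 31a. In style-transfer practice, a style code is often summarized as channel-wise statistics (as in AdaIN); the sketch below extracts per-channel mean and standard deviation as an assumed stand-in for the intermediate style data:

```python
import statistics

def style_statistics(image):
    """image: nested [row][col][channel] structure; returns one
    (mean, population stdev) pair per channel as a crude style code.
    A stand-in only; the actual encoder 31a is a learned network.
    """
    pixels = [px for row in image for px in row]
    channels = list(zip(*pixels))
    return [(statistics.fmean(c), statistics.pstdev(c)) for c in channels]
```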
  • a processor (not shown) may extract a prediction value for attribute information of the style data.
  • the attribute information may be any information capable of supporting reading of the aforementioned biometric image.
  • the adversarial style image 10a may be generated by comparing the predicted value of attribute information with a desired disease prediction value.
  • The adversarial style image 10a is generated in the same or a similar manner as described above with reference to FIG. 2 and, as shown in FIG. 4, may be expressed differently according to the style data (e.g., Style A and Style B).
  • The adversarial style image 10a and the second biometric image 10 are input to the second sub machine learning model 30b.
  • The second sub machine learning model 30b may generate various third biometric images 50 by fusing the adversarial style image 10a with the second biometric image 10.
  • The second sub machine learning model 30b can be implemented with various deep-learning-based learning algorithms, including CycleGAN, GANMOOK, StarGAN, and real-time style transfer models, and as a generative adversarial network or a style transfer model depending on the amount of training data.
  • The second sub machine learning model 220 may calculate a loss using an identity loss function proposed by the present invention. This identity loss function enables style conversion without damaging the structure of the image data by selecting semantically similar regions during image-data-to-style mapping.
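The identity loss proposed here is the patent's own variant, which it does not spell out. For orientation only, the standard CycleGAN-style identity loss (an L1 penalty that discourages the generator from altering images already in the target style) can be written as:

```python
def l1_identity_loss(generated, target):
    """Mean absolute difference between two flat lists of pixel values,
    i.e. ||G(y) - y||_1 averaged over pixels. Shown as the conventional
    identity loss, not the patent's proposed variant.
    """
    assert len(generated) == len(target) and target
    return sum(abs(g - t) for g, t in zip(generated, target)) / len(target)
```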
  • FIG. 5 is a diagram schematically illustrating an exemplary configuration of an apparatus for supporting reading of a biometric image that performs a method of supporting reading of a biometric image according to an embodiment of the present invention.
  • a biometric image reading device 100 may include a computing device 110, a display device 130, and a camera 150.
  • The computing device 110 includes a processor 111, a memory unit 113, a storage device 115, an input/output interface 117, a network adapter 118, a display adapter 119, and a system bus 112 that connects various system components, including the processor, to the memory unit 113, but is not limited thereto.
  • the biometric image reading support device may include a system bus 112 for transferring information as well as other communication mechanisms.
  • The system bus or other communication mechanism may connect the processor, a memory that is a computer-readable recording medium, a short-range communication module (e.g., Bluetooth or NFC), a network adapter including a network interface or mobile communication module, a display device (e.g., CRT or LCD), input devices (e.g., keyboard, keypad, virtual keyboard, mouse, trackball, stylus, touch sensing means, etc.), and/or subsystems.
  • The processor 111 may be a processing module that performs processing automatically using the machine learning model 13, and may be, but is not limited to, a CPU, an application processor (AP), a microcontroller, etc.
  • the processor 111 may display the operation of the biometric image reading support apparatus and the user interface on the display device 130 by communicating with the display adapter 119 , for example, a hardware controller for the display device.
  • the processor 111 controls the operation of the biometric image reading support apparatus according to an embodiment of the present invention to be described later by accessing the memory unit 113 and executing one or more sequences of commands or logic stored in the memory unit.
  • These instructions may be read into memory from static storage or other computer readable media such as a disk drive.
  • hard-wired circuitry may be used in place of or combined with software instructions to implement the present disclosure.
  • Logic may refer to any medium that participates in providing instructions to the processor and may be loaded into the memory unit 113 .
  • System bus 112 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • By way of example, these architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, Accelerated Graphics Port (AGP) bus, Peripheral Component Interconnect (PCI) bus, PCI-Express bus, Personal Computer Memory Card Industry Association (PCMCIA) bus, and Universal Serial Bus (USB).
  • the system bus 112 can be implemented as a wired or wireless network connection.
  • Subsystems, each including the processor 111, a mass storage device, an operating system (113c, 115a), imaging software (113b, 115b), imaging data (113a, 115c), a network adapter 118, system memory, an input/output interface 117, a display adapter 119, and a display device 130, may each be included in one or more remote computing devices (200, 300, 400) to be described later at physically separate locations, and may be connected through these types of buses to efficiently run a distributed system.
  • Transmission media including the wires of the bus may include coaxial cable, copper wire, and optical fibers.
  • transmission media may take the form of acoustic or light waves generated during radio wave communication or infrared data communication.
  • The apparatus 100 for supporting biometric image reading may send and receive messages, data, information, and commands of one or more programs (i.e., application code) through a network link and the network adapter 118.
  • The network adapter 118 may include a separate or integrated antenna enabling transmission and reception over the network link.
  • The network adapter 118 may access a network and communicate with the remote computing devices 200, 300, and 400.
  • The network may include, but is not limited to, at least one of a LAN, WLAN, PSTN, and cellular phone network.
  • The network adapter 118 may include at least one of a network interface and a mobile communication module for accessing the network.
  • The mobile communication module can access a mobile communication network of each generation (e.g., 2G to 5G mobile communication networks).
  • The program code may be executed by the processor 111 when received, and/or may be stored for execution in a disk drive of the memory unit 113 or in a non-volatile memory of a type other than a disk drive.
  • The computing device 110 may include a variety of computer-readable recording media.
  • A computer-readable medium can be any available medium that can be accessed by the computing device, including, for example, volatile and non-volatile media and removable and non-removable media, but is not limited thereto.
  • The memory unit 113 may store an operating system, drivers, application programs, data, and databases necessary for the operation of the device for supporting biometric image reading according to an embodiment of the present invention, but is not limited thereto.
  • The memory unit 113 may include computer-readable media in the form of volatile memory such as RAM (Random Access Memory) and non-volatile memory such as ROM (Read Only Memory) and flash memory, and may also include a disk drive.
  • The disk drive may include, but is not limited to, a hard disk drive, a solid state drive, an optical disk drive, and the like.
  • The memory unit 113 and the storage device 115 typically store data that can be immediately accessed and operated on by the processor 111, such as imaging data 113a and 115a (e.g., a biometric image of an object), and may include program modules such as the imaging software 113b and 115b and the operating systems 113c and 115c.
  • The machine learning model 13 may be loaded into the processor, the memory unit 113, or the storage device 115.
  • The machine learning model may include, but is not limited to, machine learning algorithms such as a deep neural network (DNN), a convolutional neural network (CNN), and a recurrent neural network (RNN).
  • A deep neural network (DNN) is an artificial neural network (ANN) consisting of several hidden layers between an input layer and an output layer.
  • Like a general artificial neural network, a deep neural network (DNN) can model complex non-linear relationships.
  • Each object may be represented as a hierarchical composition of the basic elements of an image.
  • The additional layers progressively aggregate the features gathered in the lower layers. This characteristic allows deep neural networks to model complex data with fewer units (nodes) than comparably performing artificial neural networks.
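The layered structure described above can be sketched as a minimal forward pass. The layer sizes, random initialization, and ReLU non-linearity below are illustrative assumptions for exposition only, not part of the disclosed models:

```python
import numpy as np

def relu(x):
    # Non-linearity applied between layers
    return np.maximum(0.0, x)

def dnn_forward(x, layers):
    """Pass input x through a stack of (weight, bias) layers with ReLU."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)
    w, b = layers[-1]
    return x @ w + b  # linear output layer

# Input layer of 4 units, two hidden layers of 8 units, output layer of 2 units
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

out = dnn_forward(np.ones(4), layers)
print(out.shape)  # (2,)
```

Each hidden layer composes the features of the layer below it, which is the hierarchical representation the text refers to.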
  • A convolutional neural network (CNN) is a type of multilayer perceptron designed to require minimal preprocessing.
  • A convolutional neural network (CNN) consists of one or several convolutional layers with general artificial neural network layers placed on top of them, and additionally makes use of weights and pooling layers. Thanks to this structure, a CNN can fully exploit input data with a two-dimensional structure. Compared with other deep learning architectures, CNNs show good performance in both the video and audio fields. CNNs can also be trained via standard back-propagation, are easier to train than other feedforward artificial neural network techniques, and have the advantage of using fewer parameters.
  • A convolutional deep belief network (CDBN) may also be used as the machine learning model.
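The convolutional layer at the heart of such a network can be sketched as a single kernel slid over a two-dimensional input (valid padding, stride 1). The toy image and the edge-detecting kernel are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide kernel over image (valid padding, stride 1) and sum elementwise."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.array([[1.0, 0.0, -1.0]] * 3)          # simple vertical-edge filter
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

The same small set of kernel weights is reused at every position, which is why CNNs can exploit two-dimensional structure with comparatively few parameters.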
  • A recurrent neural network (RNN) refers to a neural network in which the connections between the units constituting an artificial neural network form a directed cycle.
  • A recurrent neural network can use memory inside the neural network to process arbitrary inputs. Owing to this characteristic, recurrent neural networks are used in fields such as handwriting recognition and show a high recognition rate.
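The directed cycle described above amounts to feeding the hidden state back into the cell at every step; that hidden state is the internal memory the text mentions. A minimal sketch with arbitrary illustrative weights:

```python
import numpy as np

def rnn_step(h, x, w_hh, w_xh, b):
    # The hidden state h is both an input and an output: the recurrent cycle.
    return np.tanh(h @ w_hh + x @ w_xh + b)

rng = np.random.default_rng(1)
w_hh = rng.standard_normal((3, 3)) * 0.1   # hidden-to-hidden weights
w_xh = rng.standard_normal((2, 3)) * 0.1   # input-to-hidden weights
b = np.zeros(3)

h = np.zeros(3)  # internal memory, initially empty
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    h = rnn_step(h, x, w_hh, w_xh, b)
print(h.shape)  # (3,)
```

Because h carries information from one step to the next, the same cell can process input sequences of arbitrary length.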
  • The camera unit 150 includes an image sensor (not shown) that captures an image of an object and photoelectrically converts it into an image signal, and captures a biometric image of the object.
  • The captured biometric image of the object may be provided to the processor 111 through the input/output interface 117 and processed based on the machine learning model 13, or stored in the memory unit 113 or the storage device 115.
  • The captured biometric image of the object may also be provided to the remote computing devices 200, 300, and 400 (described later) through an Internet network.
  • FIG. 6 is a diagram schematically illustrating a biometric image reading support system that performs a method of supporting reading of a biometric image according to an embodiment of the present invention.
  • The biometric image reading support system 500 may include a computing device 310, a display device 330, a camera 350, and one or more remote computing devices (Remote Computing Device, 200, 300, 400).
  • The computing device 310 and the remote computing devices 200, 300, and 400 may be connected to one another through an Internet network.
  • The components included in the computing device 310 are similar to the corresponding components in FIG. 5 described above, so descriptions of their operations and functions are omitted.
  • Likewise, the components included in each of the remote computing devices 200, 300, and 400 are similar to those of the computing device 310.
  • The computing device 310 and the remote computing devices 200, 300, and 400 may be configured to perform one or more of the methods, functions, and/or operations presented in the embodiments of the present invention.
  • These computing devices 310, 200, 300, and 400 may include an application running on at least one computing device.
  • The computing devices 310, 200, 300, and 400 may include one or more computers and one or more databases, and may be single devices, distributed devices, cloud-based computers, or combinations thereof.
  • The biometric image reading device is not limited to laptop computers, desktop computers, and servers; it can be implemented in any computing device or system capable of processing data and executing commands, and can also be implemented as other computing devices and systems connected through an Internet network.
  • The biometric image reading device may be implemented in various ways, including software (including firmware), hardware, or a combination thereof.
  • Its functions may be performed by components implemented in various ways, including discrete logic components, one or more application-specific integrated circuits (ASICs), and/or program-controlled processors.
  • FIG. 7 is a diagram illustrating a screen displayed on a display unit of an apparatus for supporting reading of a biometric image according to an embodiment of the present invention.
  • The screen displayed on the display unit 130 may be provided as a graphical user interface (GUI) screen capable of adjusting the attribute information of a biometric image (e.g., the third biometric image 50).
  • Attribute information of the biometric image created by the user may be obtained through the graphical user interface screen.
  • The displayed screen may include a biometric image screen unit 130a and an image style adjustment screen unit 130b.
  • A first biometric image 1, to be read and diagnosed based on attribute information, and a third biometric image 50, for comparison and judgment against the first biometric image 1, may be displayed on the biometric image screen unit 130a.
  • A scroll bar 131 for adjusting the style of the second biometric image may be displayed on the image style adjustment screen unit 130b.
  • Styles (adversarial style images) may be created in various ways by the methods described above with reference to FIGS. 2 to 4, and the various styles may each have their own attribute information.
  • The attribute information may be tissue attribute information labeled according to the type of style, and the style of the second biometric image (not shown) may be changed by adjusting the attribute information.
  • Various kinds of attribute information, including the C/D ratio, retinal nerve fiber layer, disc rim thinning, and the like, may be displayed on the image style adjustment screen unit 130b.
  • Since the third biometric image 50, expressed by freely adjusting the style (attribute information) of the second biometric image with the biometric image reading device 100 according to the present invention, is displayed on the screen of the display unit 130 together with the first biometric image 1 to be read, the first biometric image 1 can be compared and read against the third biometric image 50. This enables a more reliable reading, and because the reading result can be explained logically, an accurate reading can be made.
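One way a scroll-bar position on the image style adjustment screen 130b could map to a labeled attribute such as the C/D ratio is simple linear interpolation between attribute bounds. The function name, attribute bounds, and step count below are hypothetical illustrations, not the patented interface:

```python
def slider_to_attribute(slider, lo, hi, steps=100):
    """Map an integer scroll-bar position in [0, steps] to the range [lo, hi]."""
    slider = max(0, min(steps, slider))          # clamp to the scroll-bar range
    return lo + (hi - lo) * slider / steps

# e.g. a mid-position slider adjusting a hypothetical C/D ratio in [0.3, 0.8]
cd_ratio = slider_to_attribute(50, 0.3, 0.8)
print(round(cd_ratio, 2))  # 0.55
```

The adjusted attribute value would then drive regeneration of the styled image, so the user sees the style change as the scroll bar moves.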
  • FIG. 8 is a flowchart exemplarily illustrating a biometric image read support method according to an embodiment of the present invention.
  • In step S810, when a first biometric image of an object is acquired from an external device, for example, a camera or another device (not shown) interworking with the computing device 110, the processor 111 extracts first attribute information from the first biometric image based on the first machine learning model 3.
  • The first biometric image may be stored in the memory unit 113 or the storage device 115.
  • In step S920, the style of the second biometric image 10 is transformed based on the second machine learning model 30, and a third biometric image 50 is generated by the processor 111.
  • The second biometric image 10 is an image paired with the first biometric image. Its style is transformed by the processor 111 based on the first sub machine learning model 30a of the second machine learning model 30, and accordingly a plurality of adversarial style images 10a may be generated from the second biometric image 10.
  • The second sub machine learning model 30b selects one of the plurality of adversarial style images 10a and fuses it with the input second biometric image 10 (fusion and aggregation) to generate the third biometric image 50. Accordingly, second attribute information comparable to the first attribute information 1a of the first biometric image 1 may be reflected in the third biometric image 50.
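The flow of steps S810 and S920 can be sketched end to end. Every function body below is an illustrative stand-in (mean intensity for attribute extraction, brightness perturbation for style transformation, linear blending for fusion), not the patented machine learning models:

```python
import numpy as np

rng = np.random.default_rng(42)

def extract_attributes(image):
    # Stand-in for the first machine learning model 3: mean intensity only.
    return np.array([image.mean()])

def make_adversarial_styles(image, n=4):
    # Stand-in for the first sub model 30a: brightness perturbations.
    return [np.clip(image + rng.uniform(-0.2, 0.2), 0.0, 1.0) for _ in range(n)]

def fuse(image, style_image, alpha=0.5):
    # Stand-in for the second sub model 30b: simple linear blending.
    return alpha * image + (1.0 - alpha) * style_image

first = rng.uniform(0.0, 1.0, (8, 8))    # first biometric image (1)
second = rng.uniform(0.0, 1.0, (8, 8))   # paired second biometric image (10)

target = extract_attributes(first)                        # step S810
styles = make_adversarial_styles(second)                  # adversarial styles (10a)
best = min(styles, key=lambda s: abs(extract_attributes(s)[0] - target[0]))
third = fuse(second, best)                                # third image (50), S920

print(third.shape)  # (8, 8)
```

Selecting the style whose attributes lie closest to the target is what makes the resulting third image comparable to the first image.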
  • The first biometric image having the first attribute information and the third biometric image having the second attribute information are then displayed on the display unit.
  • The displayed screen may include a biometric image screen unit 130a and an image style adjustment screen unit 130b.
  • A first biometric image 1, to be read and diagnosed based on attribute information, and a third biometric image 50, for comparison and judgment against the first biometric image 1, may be displayed.
  • A scroll bar 131 for adjusting the style of the second biometric image (not shown) may be displayed on the image style adjustment screen unit 130b.
  • The device and method for supporting biometric image reading according to the present invention can support explainable biometric image reading when reading a biometric image based on a machine learning model, enabling medical staff to read biometric images more reliably.
  • The present invention can be achieved through a combination of software and hardware, or by hardware alone.
  • The subject matter of the technical solution of the present invention, or the parts contributing to the prior art, may be implemented in the form of program instructions that can be executed through various computer components and recorded on a machine-readable recording medium.
  • The machine-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination.
  • Program instructions recorded on the machine-readable recording medium may be specially designed and configured for the present invention, or may be known and usable to those skilled in the art of computer software.
  • Machine-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Program instructions include not only machine language code, such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • The hardware device may be configured to act as one or more software modules to perform processing according to the present invention, and vice versa.
  • The hardware device may include a processor, such as a CPU or GPU, coupled to a memory such as ROM/RAM for storing program instructions and configured to execute the instructions stored in the memory, and may include a communication unit for transmitting and receiving signals with external devices.
  • The hardware device may include a keyboard, a mouse, and other external input devices for receiving commands written by developers.

Abstract

Provided are a biometric image reading support device and method, the device comprising: a processor; and a memory including one or more instructions implemented to be performed by the processor, wherein the processor extracts first attribute information from a first biometric image on the basis of a first machine learning model, transforms the style of a second biometric image paired with the first biometric image on the basis of a second machine learning model to generate a third biometric image having second attribute information comparable with the first attribute information, and displays the first biometric image having the first attribute information and the third biometric image having the second attribute information on a display unit.

Description

Explainable biometric image reading support device and method
The present application relates to an explainable reading support method for reading biometric images in the medical field, and to an apparatus and system using the method.
Recently, with the development of artificial intelligence learning models, many machine learning models are being used to read medical images. For example, learning models such as convolutional neural networks (CNN), deep neural networks (DNN), recurrent neural networks (RNN), and deep belief networks (DBN) are applied to the detection, classification, and feature learning of medical images.
Machine learning models are used to support image reading (finding/diagnosis) in order to predict a subject's disease. More specifically, a medical image of a subject is acquired, attribute information is extracted based on a machine learning model and provided to the diagnostician, and a disease is predicted based on the attribute information. Here, the attribute information includes various kinds of information contained in the medical image.
However, even if the attribute information of an image is extracted based on a learning model as described above, differences in the training information fed to the learning model can lead an entity such as a medical staff member to receive incorrect information. For example, such differences may arise from a shortage of training data input to the learning model, from differences in the imaging environment (e.g., a health checkup center, a private eye clinic, or a general hospital specializing in ophthalmology), from differences in the population (e.g., only normal subjects, only abnormal subjects, or a mixture of normal and abnormal subjects), or from differences in the imaging equipment. These various factors can cause medical staff to mispredict the subject's disease.
Therefore, even when the training information differs, there is a need for methods and devices that enable more accurate disease prediction from medical images, together with explanations that can support that prediction.
Embodiments of the present invention aim to provide a biometric image reading support method and apparatus capable of accurate and reliable reading of a biometric image based on a machine learning model.
Embodiments of the present invention also aim to provide a biometric image reading support method and apparatus capable of explaining the reasons for a reading of a biometric image.
The objects of the present application are not limited to those mentioned above; other objects not mentioned will be clearly understood by those skilled in the art from the description below.
According to one aspect of the present invention, there is provided an apparatus comprising: a processor; and a memory including one or more instructions implemented to be executed by the processor, wherein the processor extracts first attribute information from a first biometric image based on a first machine learning model, transforms the style of a second biometric image paired with the first biometric image based on a second machine learning model to generate a third biometric image having second attribute information comparable with the first attribute information, and displays the first biometric image having the first attribute information and the third biometric image having the second attribute information on a display unit.
The second machine learning model may include a first sub machine learning model and a second sub machine learning model, and the style of the second biometric image may be transformed based on the first sub machine learning model to generate an adversarial style image.
The style may be transformed by mapping adversarial noise onto the second biometric image.
The adversarial noise may include at least one of a value for adjusting the gradation level of the R, G, and B pixels of the second biometric image, a value for adjusting the change in color of the R, G, and B pixels of the second biometric image, and a value for adjusting the contrast ratio of the second biometric image.
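The three kinds of adversarial noise named above can be sketched as simple operations on an RGB image with values in [0, 1]: a gradation (brightness) offset, a per-channel color shift, and a contrast-ratio scaling about the channel means. The function name and parameter values are arbitrary illustrations, not values used by the disclosed models:

```python
import numpy as np

def apply_style_noise(rgb, gradation=0.0, color=(0.0, 0.0, 0.0), contrast=1.0):
    """Apply gradation, color, and contrast adjustments to an RGB array in [0, 1]."""
    out = rgb + gradation                       # gradation-level adjustment
    out = out + np.asarray(color)               # per-channel R, G, B color shift
    mean = out.mean(axis=(0, 1), keepdims=True)
    out = (out - mean) * contrast + mean        # contrast-ratio adjustment
    return np.clip(out, 0.0, 1.0)

rgb = np.full((4, 4, 3), 0.5)  # uniform mid-gray toy image
styled = apply_style_noise(rgb, gradation=0.1, color=(0.05, 0.0, -0.05), contrast=1.2)
print(styled[0, 0])  # channels shifted to roughly 0.65, 0.60, 0.55
```

Sampling different combinations of these three values would yield the plurality of adversarial style images described above.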
The second sub machine learning model may generate the third biometric image by fusing the adversarial style image with the second biometric image.
The screen displayed on the display unit may include a biometric image screen unit on which the third biometric image is displayed and an image style adjustment screen unit on which a scroll bar for adjusting the style of the second biometric image is displayed.
According to an embodiment of the present invention, more accurate reading can be supported when reading a biometric image based on a machine learning model.
In addition, based on the machine learning model, comparable biometric images are generated and compared when reading a biometric image, so that the reading result can be explained and a more reliable biometric image reading becomes possible.
The effects of the present application are not limited to those mentioned above; other effects not mentioned will be clearly understood by those skilled in the art from the description below.
FIG. 1 is a diagram exemplarily illustrating a process of generating attribute information of a biometric image with a biometric image reading device according to an embodiment of the present invention.
FIG. 2A is a diagram exemplarily illustrating a process of generating a plurality of adversarial style images by a biometric image reading device according to an embodiment of the present invention.
FIG. 2B is a diagram exemplarily illustrating a process of generating a single adversarial style image according to a predicted value by a biometric image reading device according to an embodiment of the present invention.
FIG. 3 is a diagram exemplarily illustrating a process of generating a third biometric image to be compared with a first biometric image by a biometric image reading device according to an embodiment of the present invention.
FIG. 4 is a diagram exemplarily illustrating a second biometric image and adversarial styles generated by a biometric image reading device according to an embodiment of the present invention.
FIG. 5 is a diagram schematically illustrating an exemplary configuration of a biometric image reading support apparatus that performs a method of supporting the reading of a biometric image according to an embodiment of the present invention.
FIG. 6 is a diagram schematically illustrating a biometric image reading support system that performs a method of supporting the reading of a biometric image according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating a screen displayed on the display unit of a biometric image reading support apparatus according to an embodiment of the present invention.
FIG. 8 is a flowchart exemplarily illustrating a biometric image reading support method according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. However, the accompanying drawings are provided only to more easily disclose the contents of the present invention, and those of ordinary skill in the art will readily appreciate that the scope of the present invention is not limited to the scope of the accompanying drawings.
In addition, the terms used in the detailed description and claims of the present invention are used only to describe specific embodiments and are not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly indicates otherwise.
In the detailed description and claims of the present invention, terms such as "comprise" or "have" are intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should be understood not to preclude in advance the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
In the detailed description and claims of the present invention, terms such as "learning" or "training" refer to performing machine learning through procedural computing, and should be understood not to refer to mental processes such as human educational activity.
The term "image" used in the detailed description and claims of the present invention may be defined as a digital reproduction or imitation of the form of a person or object, or of its specific characteristics, and the image may be, but is not limited to, a JPEG image, a PNG image, a GIF image, a TIFF image, or any other digital image format known in the art. Also, "image" may be used with the same meaning as "photograph".
The term "attribute" used in the detailed description and claims of the present invention may be defined as a group of one or more descriptive characteristics of a subject capable of distinguishing a disease (lesion), and an "attribute" may be expressed as a numerical feature.
The apparatus, methods, systems, and devices disclosed in the present invention may be applied to medical images of the fundus and of osteoarticular structures, or to any other biological tissue image capable of supporting the diagnosis of a disease state, but are not limited thereto, and may be used in computed tomography (CT), magnetic resonance imaging (MRI), computed radiography, magnetic resonance, angioscopy, optical coherence tomography, color flow Doppler, cystoscopy, diaphanography, echocardiography, fluorescein angiography, laparoscopy, magnetic resonance angiography, positron emission tomography, single-photon emission computed tomography, X-ray angiography, nuclear medicine, biomagnetic imaging, colposcopy, duplex Doppler, digital microscopy, endoscopy, laser imaging, surface scanning, magnetic resonance spectroscopy, radiographic imaging, thermal imaging, and radiofluoroscopy.
Moreover, the present invention covers all possible combinations of the embodiments shown herein. It should be understood that the various embodiments of the present invention are different from one another but need not be mutually exclusive. For example, specific shapes, structures, and characteristics described herein in connection with one embodiment may be implemented in another embodiment without departing from the spirit and scope of the invention. It should also be understood that the position or arrangement of individual components within each disclosed embodiment may be changed without departing from the spirit and scope of the invention. Accordingly, the detailed description below is not to be taken in a limiting sense, and the scope of the present invention, if properly described, is limited only by the appended claims, along with the full range of equivalents to which those claims are entitled. Like reference numerals in the drawings indicate the same or similar functions throughout the various aspects.
FIG. 1 is a diagram exemplarily illustrating a process of generating attribute information of a biometric image with a biometric image reading device according to an embodiment of the present invention.
In general, when a biometric image such as a fundus image is acquired from an arbitrary object, disease attribute information is derived from the biometric image using a machine learning model. However, as described above, differences in the training information fed to the machine learning model prevent reliable attribute information from being derived. In addition, even if the attribute information of a biometric image is derived in the first instance, the lack of a supporting explanation for predicting and judging a disease on that basis means that reliable reading and diagnosis of the biometric image cannot be supported.
To resolve such unreliable support for biometric image reading, the biometric image reading support apparatus according to an embodiment of the present invention can automatically generate attribute information of a biometric image that is reliable and that can assist medical staff in making an accurate reading. That is, the biometric image reading support apparatus 100 according to the present invention determines attribute information from a first biometric image 1 of the biological tissue of an object, primarily captured through a camera or the like, and secondarily helps to judge or explain whether the first biometric image 1 of the object, including its attribute information, is an actually reliable biometric image.
To generate the attribute information of a biometric image as described above, the biometric image reading support apparatus 100 extracts first attribute information 5 of the first biometric image 1 from the captured first biometric image 1 using a first machine learning model 3, and the extracted first attribute information 5 of the first biometric image 1 may be stored in the system memory unit 113 or the storage device 115, which will be described later. The first machine learning model 3 may be loaded into and executed by the processor 111, or, although not shown, may be loaded into and executed from a computer-readable recording medium. The machine learning model may also be loaded into the memory unit 113 or the storage device 115 and operated by the processor 111.
In addition, the biometric image reading support apparatus 100 may further store clinical information of an object, for example, an object (patient), in the memory unit 113 or the storage device 115, which will be described later. The processor 111 may utilize the clinical information of the object stored in the memory or storage device to extract the first attribute information 5 of the first biometric image 1 based on the machine learning model. The clinical information may include, but is not limited to, the age, sex, medical history, interview information, test measurement values, exercise habits, eating habits, family history related to the medical history, alcohol consumption, and smoking status of the object (patient). The interview information may include a neurological interview that a doctor can perform on a patient and, unlike a medical history, may mean abnormal findings observed at the present time. In addition, the test measurement values may include, for example, measured intraocular pressure, blood pressure, and blood glucose level.
제 1 속성정보(5)는 의료진과 같은 엔티티(entity)가 병명을 예측 및 진단하는데 지원할 수 있는 정보이다. 예를 들어, 제1생체 이미지(1)가 안저 이미지일 경우, 제1 속성정보(5)는 안저 이미지에 포함된 Increased C/D 비(Cup-to-Disc ratio)의 정보, Disc Rim Thinning에 대한 정보, 망막신경섬유층 결손(Retinal Nerve Fiber Layer Defect)에 관한 정보, 망막 출혈(Retinal Hemorrhage)에 대한 정보 등을 포함할 수 있고, 이들 정보는 조직의 위치, 두께, 모양, 픽셀 정보일 수 있지만 이에 한정되지 않고, 각 조직의 속성을 파악할 수 있는 정보이면 어느 것이든 포함될 수 있다. 이러한 조직의 정보로부터 장치(100)가 대상체의 안저 이미지에서 녹내장(Glaucoma)을 예측 및 진단할 수 있다. The first attribute information 5 is information that can be supported by an entity such as a medical staff in predicting and diagnosing a disease. For example, when the first living body image 1 is a fundus image, the first attribution information 5 is information on an increased C/D ratio (Cup-to-Disc ratio) included in the fundus image, disc rim thinning information, information on retinal nerve fiber layer defect, information on retinal hemorrhage, etc., and these information may be tissue position, thickness, shape, pixel information It is not limited thereto, and any information capable of identifying the attributes of each tissue may be included. Based on this tissue information, the device 100 may predict and diagnose glaucoma in the fundus image of the subject.
또한, 제1생체 이미지(1)가 골관절 이미지일 경우, 제1 속성정보(5)는 골관절 이미지에 포함된 관절 간격(Joint space narrowing), 골극(Osteophyte), 경화(Sclerosis), 결손(Bone end deformity)에 대한 정보 등을 포함할 수 있다.In addition, when the first biological image 1 is a bone joint image, the first attribute information 5 includes joint space narrowing, osteophyte, sclerosis, and bone end defects included in the bone joint image. deformity) and the like.
The second biometric image 10 is an image paired with the first biometric image 1; any image that can take the first biometric image 1 as its target, or serve as the basis for the first biometric image, may be used. The style of the second biometric image 10 is transformed based on a second machine learning model 30, and a third biometric image 50 may be generated by the processor 111.
Specifically, the second machine learning model 30 may include a first sub machine learning model 30a and a second sub machine learning model 30b. The style of the second biometric image 10 is transformed by the processor 111 based on the first sub machine learning model 30a, whereby a plurality of adversarial style images 10a may be generated from the second biometric image 10. The generated adversarial style images 10a may be stored in the system memory unit 113 or the storage device 115, described later. The first sub machine learning model 30a may be loaded into and executed by the processor 111, or, although not shown, may be stored on and executed from a computer-readable recording medium. The first sub machine learning model 30a may also be stored in the memory unit 113 or the storage device 115 and executed by the processor 111. An adversarial style image 10a is an image obtained by mapping noise onto the second biometric image 10 so that it carries second attribute information comparable to the first attribute information 1a of the first biometric image 1; the method of generating the adversarial style image 10a is described later. The second attribute information is similar to the first attribute information described above, so its description is omitted.
The second sub machine learning model 30b selects one of the plurality of adversarial style images 10a and fuses (fusion and aggregation) it with the input second biometric image 10 to generate the third biometric image 50. Accordingly, second attribute information comparable to the first attribute information 1a of the first biometric image 1 may be reflected in the third biometric image 50.
FIG. 2A exemplarily shows a process in which a plurality of adversarial style images are generated by a biometric image reading apparatus according to an embodiment of the present invention, and FIG. 2B exemplarily shows a process in which a single adversarial style image is generated according to a prediction value by a biometric image reading apparatus according to an embodiment of the present invention.
Referring to FIG. 2A, the processor 111 may map adversarial noises 11a, 11b, 11c, … onto the second biometric image 10 to generate adversarial style images 10a1, 10a2, 10a3, …. Here, the adversarial noises 11a, 11b, 11c, … are noises that can be mapped identically, or independently and differently, onto specific regions of the second biometric image 10, for example the cup-to-disc ratio region, the disc rim thinning (DRT) region, or the retinal nerve fiber layer (RNFL) region, and the adversarial style images 10a1, 10a2, 10a3, … are the images generated by mapping the respective adversarial noises 11a, 11b, 11c, ….
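The region-wise noise mapping described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the region masks, their coordinates, and the per-region noise amplitudes are assumptions made for the example.

```python
import numpy as np

def map_adversarial_noise(image, regions, noises):
    """Map a (possibly different) noise pattern onto each labeled region.

    image   : H x W x 3 array with values in [0, 1]
    regions : dict name -> boolean mask of shape H x W (e.g. cup, disc, RNFL)
    noises  : dict name -> scalar or per-pixel perturbation for that region
    """
    styled = image.copy()
    for name, mask in regions.items():
        # Perturb only the pixels of this labeled region, keeping values valid.
        styled[mask] = np.clip(styled[mask] + noises[name], 0.0, 1.0)
    return styled

# Toy example: a 4x4 "fundus" with two hypothetical labeled regions.
img = np.full((4, 4, 3), 0.5)
regions = {
    "cup":  np.zeros((4, 4), dtype=bool),
    "rnfl": np.zeros((4, 4), dtype=bool),
}
regions["cup"][:2, :2] = True      # assumed cup region
regions["rnfl"][2:, 2:] = True     # assumed RNFL region
noises = {"cup": 0.2, "rnfl": -0.1}  # independently chosen per region

out = map_adversarial_noise(img, regions, noises)
```

Pixels outside every labeled region are left untouched, which mirrors the description that noise may be mapped independently per region rather than to the whole image.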
Referring to FIG. 2B, adversarial noise 11 is mapped onto the second biometric image 10 to generate an adversarial style image (AS image) 10a containing second attribute information 40 obtained by changing the attribute information of the second biometric image. The mapping of the adversarial noise 11 may be performed by the processor 111.
The adversarial noise 11 may include values that adjust the gradation level of the R, G, and B pixels representing each labeled tissue (e.g., retina, nerve fiber layer, cup, disc, etc.) in the second biometric image 10, values that adjust the change in color of the R, G, and B pixels, values that locally adjust the contrast ratio in the second biometric image, and the like; it is not limited thereto, and any element that can change the attribute information of the second biometric image may be included.
The adversarial style image 10a containing the second attribute information may be generated by mapping the adversarial noise once, but it may also be generated by mapping repeatedly so that the prediction value obtained for the second attribute information 40 based on the machine learning model converges to a set value.
As illustrated, when the set value is 1 and the prediction value according to the attribute information 15 of the second biometric image 10 obtained based on the machine learning model 13 is 0.58, the mapping proceeds by applying adversarial noise to the second biometric image 10 and then repeating the mapping so that the prediction values (0.60, 0.66, 0.68, …) of the adversarial style image 10a obtained from the same machine learning model converge to the set value of 1.
Likewise, although not shown, when the set value is 0 and the prediction value of the second biometric image 10 obtained based on the machine learning model is 0.58, adversarial noise is repeatedly mapped onto the second biometric image so that the prediction values (0.56, 0.54, 0.52, …) of the adversarial style image 10a converge to the set value of 0. More specifically, the adversarial noise may be mapped onto the labeled tissue within the second biometric image.
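The iterative mapping loop described above can be sketched in the following way. The prediction model and the single noise-mapping step are hypothetical stand-ins (a mean-intensity "prediction" and a small uniform shift), chosen only so the convergence behavior toward the set value is visible; the patent does not specify these functions.

```python
import numpy as np

def iterate_to_set_value(image, predict, perturb, set_value, steps=100, tol=0.02):
    """Repeatedly map adversarial noise until predict(image) converges
    to the set value (e.g. 1 for near-abnormal, 0 for near-normal)."""
    styled = image.copy()
    for _ in range(steps):
        p = predict(styled)
        if abs(p - set_value) <= tol:
            break
        # One mapping step, nudged in the direction of the remaining gap.
        styled = perturb(styled, set_value - p)
    return styled, predict(styled)

# Toy stand-ins (assumptions for illustration only).
img = np.full((8, 8), 0.58)                 # initial prediction 0.58, as in the text
predict = lambda x: float(x.mean())
perturb = lambda x, gap: np.clip(x + 0.02 * np.sign(gap), 0.0, 1.0)

styled, p = iterate_to_set_value(img, predict, perturb, set_value=1.0)
```

With these stand-ins the prediction rises in 0.02 steps (0.60, 0.62, …) until it falls within the tolerance of the set value 1, matching the monotone sequence of prediction values described above.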
Here, a set value of 1 corresponds to a nearly abnormal biometric image, that is, one with findings from which a disease (e.g., glaucoma, osteoarthritis, etc.) can be predicted and diagnosed, whereas a set value of 0 corresponds to a nearly normal biometric image, that is, one without findings from which a disease can be predicted or diagnosed. Accordingly, the adversarial style image 10a can be reproduced repeatedly according to the number of set values and generated in various forms. For example, adversarial style images may be generated over a plurality of unit set-value intervals, such as a first unit set value spanning 0.1 to 0.3, a second unit set value spanning 0.31 to 0.6, a third unit set value spanning 0.61 to 0.9, and a fourth unit set value spanning 0.91 to 1.0.
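The division into unit set-value intervals can be expressed directly; the interval boundaries below are the ones named in the text, while the lookup helper itself is an illustrative assumption.

```python
# The four unit set-value intervals named above, as (low, high) pairs.
UNIT_INTERVALS = [(0.10, 0.30), (0.31, 0.60), (0.61, 0.90), (0.91, 1.00)]

def unit_interval_index(set_value):
    """Return which unit set-value interval a set value falls in (0-based),
    or None if it lies outside every interval."""
    for i, (lo, hi) in enumerate(UNIT_INTERVALS):
        if lo <= set_value <= hi:
            return i
    return None
```

One adversarial style image can then be generated per interval, giving the "various forms" mentioned above: near-normal styles from the low intervals and near-abnormal styles from the high ones.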
The adversarial style image 10a obtained in this way, by mapping adversarial noise onto the second biometric image 10 based on the machine learning model, is fused again with the second biometric image 10 by the processor 111 based on the second sub machine learning model 30b described above, generating the third biometric image 50. The generated third biometric image 50 may be displayed on the display unit, provided to an external entity through a transmission module such as the display adapter 119 or the network adapter 118, described later, or provided to a remote computing device or other apparatus linked to the computing device 110 over an Internet network.
Because the provided third biometric image 50 can reflect second attribute information corresponding to a nearly normal or nearly abnormal biometric image, that is, because the style of the third biometric image 50 can be varied freely, the entity reading the first biometric image 1 can easily compare it against the third biometric image. The existence of a comparison reference makes the reading result explainable, which in turn increases the reliability of reading the first biometric image 1 carrying the first attribute information 1a.
FIG. 3 exemplarily shows a process in which a third biometric image to be compared with the first biometric image is generated by a biometric image reading apparatus according to an embodiment of the present invention, and FIG. 4 exemplarily shows a second biometric image and adversarial styles generated by a biometric image reading apparatus according to an embodiment of the present invention.
Referring to FIGS. 3 and 4, the second biometric image 10 is input to a style machine learning model (style encoder) 31a of the first sub machine learning model 30a, and intermediate style data may be generated from the second biometric image 10 by a processor (not shown). The generated style data may be image data. The second biometric image 10 may be the first biometric image 1 captured by the biometric image reading apparatus 100 described with reference to FIG. 1, or a fake image similar to the first biometric image 1 used to generate the style data. Based on a prediction machine learning model (predictor) 31b of the first sub machine learning model 30a, the processor (not shown) may extract a prediction value for the attribute information of the style data. The attribute information may be any information capable of supporting the reading of the biometric image described above. The prediction value for the attribute information is then compared with a set value (desired disease prediction value) to generate the adversarial style image 10a. The adversarial style image 10a is generated by the same or a similar method as described above with reference to FIG. 2 and, as shown in FIG. 4, may be expressed differently according to the style data (e.g., Style A, Style B).
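The encoder-predictor-comparison pipeline above can be sketched as follows. Both `style_encoder` and `predictor` are hypothetical stand-ins (the patent does not disclose their architectures); only the control flow, encoding the image into style data, predicting from it, and nudging the style data until the prediction approaches the desired disease prediction value, is taken from the description.

```python
import numpy as np

def style_encoder(image):
    # Hypothetical stand-in for the style encoder 31a: "intermediate
    # style data" modeled here as a lightly rescaled copy of the image.
    return image * 0.9 + 0.05

def predictor(style_data):
    # Hypothetical stand-in for the predictor 31b: a disease prediction
    # value in [0, 1] derived from the style data.
    return float(style_data.mean())

def make_adversarial_style(image, desired_value, step=0.05, iters=50, tol=0.03):
    """Encode, predict, and adjust the style data until the prediction
    approaches the desired disease prediction value."""
    style = style_encoder(image)
    for _ in range(iters):
        p = predictor(style)
        if abs(p - desired_value) <= tol:
            break
        style = np.clip(style + step * np.sign(desired_value - p), 0.0, 1.0)
    return style

style = make_adversarial_style(np.full((8, 8), 0.5), desired_value=1.0)
```

Different encoders would yield different intermediate style data, which is one way the Style A / Style B variation shown in FIG. 4 could arise.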
The adversarial style image 10a and the second biometric image 10 are input to the second sub machine learning model 30b. The second sub machine learning model 30b may generate various third biometric images 50 by fusing the adversarial style image 10a on the basis of the second biometric image 10. The second sub machine learning model 30b may be implemented with various learning algorithms, including deep-learning-based CycleGAN, GANMOOK, StarGAN, and real-time style transfer model learning algorithms, and, depending on the amount of training data, a generative adversarial network or a style transfer model. The second sub machine learning model 220 according to an embodiment may compute a loss using the identity loss function proposed by the present invention. This identity loss function selects semantically similar regions during image-data-to-style mapping, enabling style conversion without damaging the structure of the image data.
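The patent does not give the exact form of its identity loss, so the sketch below shows the generic identity term used in CycleGAN-style training as an assumed approximation: a generator fed an image that already has the target style should return it (nearly) unchanged, which penalizes distortion of image structure.

```python
import numpy as np

def identity_loss(generator, image, weight=0.5):
    """CycleGAN-style identity term: the generator applied to an image
    already in the target style should leave it unchanged, so the L1
    distance between input and output measures structural distortion."""
    reconstructed = generator(image)
    return weight * float(np.abs(reconstructed - image).mean())

# Two toy generators: one structure-preserving, one that distorts pixels.
identity = lambda x: x           # changes nothing -> zero identity loss
shift = lambda x: x + 0.1        # shifts every pixel -> positive loss

img = np.random.default_rng(0).random((8, 8))
```

A generator trained with this extra term is discouraged from rewriting anatomy when only the style should change, which matches the stated goal of style conversion without damaging image structure.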
FIG. 5 schematically shows an exemplary configuration of a biometric image reading support apparatus that performs a method of supporting the reading of a biometric image according to an embodiment of the present invention.
Referring to FIG. 5, the biometric image reading support apparatus 100 according to an embodiment of the present invention may include a computing device 110, a display device 130, and a camera 150. The computing device 110 may include, but is not limited to, a processor 111, a memory unit 113, a storage device 115, an input/output interface 117, a network adapter 118, a display adapter 119, and a system bus 112 connecting various system components, including the processor, to the memory unit 113. The biometric image reading support apparatus may also include communication mechanisms other than the system bus 112 for transferring information.
The system bus or other communication mechanism interconnects the processor, the memory (a computer-readable recording medium), a short-range communication module (e.g., Bluetooth or NFC), a network adapter including a network interface or mobile communication module, a display device (e.g., CRT or LCD), an input device (e.g., keyboard, keypad, virtual keyboard, mouse, trackball, stylus, touch sensing means, etc.), and/or subsystems.
The processor 111 may be a processing module that performs automatic processing using the machine learning model 13, and may be, but is not limited to, a CPU, an application processor (AP), a microcontroller, or the like.
The processor 111 may communicate with a hardware controller for the display device, for example the display adapter 119, to present the operation of the biometric image reading support apparatus and a user interface on the display device 130.
The processor 111 controls the operation of the biometric image reading support apparatus according to an embodiment of the present invention, described below, by accessing the memory unit 113 and executing one or more sequences of instructions or logic stored in the memory unit.
These instructions may be read into the memory from another computer-readable recording medium such as static storage or a disk drive. In other embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions to implement the present disclosure. Logic may refer to any medium that participates in providing instructions to the processor, and may be loaded into the memory unit 113.
The system bus 112 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, Peripheral Component Interconnect (PCI) and PCI-Express buses, a Personal Computer Memory Card Industry Association (PCMCIA) bus, and a Universal Serial Bus (USB).
The system bus 112 may also be implemented as a wired or wireless network connection. Each of the subsystems, including the processor 111, a mass storage device, operating systems 113c and 115a, imaging software 113b and 115b, imaging data 113a and 115c, the network adapter 118, the system memory, the input/output interface 117, the display adapter 119, and the display device 130, may be contained in one or more remote computing devices 200, 300, 400, described later, at physically separate locations, and may be connected through buses of this kind to efficiently run a distributed system.
Transmission media, including the wires of the bus, may include coaxial cable, copper wire, and optical fibers. In one example, transmission media may take the form of acoustic or light waves generated during radio wave communication or infrared data communication.
The biometric image reading support apparatus 100 according to an embodiment of the present invention may transmit and receive messages, data, information, and instructions including one or more programs (i.e., application code) through a network link and the network adapter 118.
The network adapter 118 may include a separate or integrated antenna for enabling transmission and reception over the network link. The network adapter 118 may access a network and communicate with remote computing devices 200, 300, 400. The network may include, but is not limited to, at least one of a LAN, a WLAN, a PSTN, and a cellular phone network.
The network adapter 118 may include at least one of a network interface and a mobile communication module for accessing the network. The mobile communication module can access a mobile communication network of any generation (e.g., 2G to 5G mobile communication networks).
The program code may be executed by the processor 111 as it is received, and/or may be stored for later execution in a disk drive of the memory unit 113 or in a non-volatile memory of a type other than a disk drive.
The computing device 110 may incorporate a variety of computer-readable recording media. A readable medium may be any available medium that can be accessed by the computing device, and may include, but is not limited to, volatile and non-volatile media, and removable and non-removable media.
The memory unit 113 may store an operating system, drivers, application programs, data, and databases required for the operation of the biometric image reading support apparatus according to an embodiment of the present invention, but is not limited thereto. The memory unit 113 may include computer-readable media in the form of volatile memory such as random access memory (RAM) and non-volatile memory such as read-only memory (ROM) and flash memory, and may also include a disk drive, for example a hard disk drive, a solid state drive, or an optical disc drive, but is not limited thereto. The memory unit 113 and the storage device 115 each typically contain data such as imaging data 113a, 115a, for example a biometric image of a subject, and program modules such as imaging software 113b, 115b and operating systems 113c, 115c that are immediately accessible for operation by the processor 111.
The machine learning model 13 may be embedded in the processor, the memory unit 113, or the storage device 115. The machine learning model here may include, but is not limited to, machine learning algorithms such as a deep neural network (DNN), a convolutional neural network (CNN), and a recurrent neural network (RNN).
A deep neural network (DNN) is an artificial neural network (ANN) composed of multiple hidden layers between an input layer and an output layer. Like ordinary artificial neural networks, a DNN can model complex non-linear relationships. For example, in a deep neural network structure for an object identification model, each object may be represented as a hierarchical composition of basic image elements, with the additional layers aggregating the features gradually gathered by the lower layers. This characteristic of deep neural networks allows complex data to be modeled with fewer units (nodes) than a comparably performing artificial neural network.
A convolutional neural network (CNN) is a type of multilayer perceptron designed to require minimal preprocessing. A CNN consists of one or several convolutional layers topped with ordinary artificial neural network layers, and additionally utilizes weights and pooling layers. Thanks to this structure, a CNN can fully exploit input data with a two-dimensional structure. Compared with other deep learning structures, CNNs show good performance in both image and speech domains. A CNN can also be trained via standard backpropagation, and CNNs are easier to train than other feedforward artificial neural network techniques and have the advantage of using fewer parameters.
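The two building blocks named above, the convolutional layer and the pooling layer, can be sketched minimally as follows; the kernel and input values are illustrative only.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most
    CNN implementations): slide the kernel over the 2-D input."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, the 'pooling layer' mentioned above."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)   # toy 2-D input
edge = np.array([[1.0, -1.0]])                 # simple horizontal-difference kernel
feat = conv2d(x, edge)                         # 4x3 feature map
pooled = max_pool(np.abs(feat))                # 2x1 after 2x2 pooling
```

This illustrates how a CNN exploits the two-dimensional structure of the input: the same small kernel (a shared set of weights) is reused at every position, which is why CNNs need fewer parameters than fully connected feedforward networks.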
In deep learning, the convolutional deep belief network (CDBN) has also been developed. It is structurally very similar to a CNN, so it can exploit two-dimensional structure well while also benefiting from the pre-training used in deep belief networks (DBN). A CDBN provides a general structure applicable to a variety of image and signal processing techniques, and has been used in several benchmark results on standard image datasets such as CIFAR.
A recurrent neural network (RNN) is a neural network in which the connections between the units form a directed cycle. An RNN can use memory internal to the network to process arbitrary input sequences. Owing to this property, RNNs are used in fields such as handwriting recognition, where they achieve high recognition rates.
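The "internal memory" of a recurrent unit can be sketched as a hidden state carried across time steps. The toy cell below is deliberately linear so the trace is easy to follow by hand; real RNN cells apply a non-linear activation to the same recurrence.

```python
# Toy recurrent cell: h_t = w_x * x_t + w_h * h_{t-1}
# The hidden state h is the network's internal memory.
def run_rnn(inputs, w_x=1.0, w_h=0.5):
    h = 0.0
    states = []
    for x in inputs:
        h = w_x * x + w_h * h   # the new state depends on the previous one
        states.append(h)
    return states

# A single impulse at t = 0 keeps echoing through the state:
assert run_rnn([1.0, 0.0, 0.0]) == [1.0, 0.5, 0.25]
```

Because each output depends on the whole history of inputs through `h`, the same small cell can process sequences of arbitrary length.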
The camera unit 150 includes an image sensor (not shown) that captures an image of an object and photoelectrically converts it into an image signal, and captures a biometric image of the subject. The captured biometric image may be provided to the processor 111 through the input/output interface 117 and processed based on the machine learning model 13, or may be stored in the memory unit 113 or the storage device 115. The captured biometric image may also be provided over an Internet network to the remote computing devices 200, 300, and 400 described below.
FIG. 6 schematically illustrates a biometric image reading support system that performs a method of supporting the reading of a biometric image according to an embodiment of the present invention.
Referring to FIG. 6, a biometric image reading support system 500 according to an embodiment of the present invention may include a computing device 310, a display device 330, a camera 350, and one or more remote computing devices 200, 300, and 400. The computing device 310 and the remote computing devices 200, 300, and 400 may be connected to one another through an Internet network. The components of the computing device 310 are similar to the corresponding components of FIG. 5 described above, so descriptions of their operations and functions are omitted. Likewise, the components of each of the remote computing devices 200, 300, and 400 are similar to those of the computing device 310.
The computing device 310 and the remote computing devices 200, 300, and 400 may be configured to perform one or more of the methods, functions, and/or operations presented in the embodiments of the present invention. These computing devices 310, 200, 300, and 400 may include an application running on at least one computing device. They may also include one or more computers and one or more databases, and may be a single device, a distributed device, a cloud-based computer, or a combination thereof.
The biometric image reading device according to the present invention is not limited to laptop computers, desktop computers, and servers; it may be implemented in any computing device or system capable of executing instructions to process data, and in other computing devices and systems connected through an Internet network. The biometric image reading device may also be implemented in various ways, including software (including firmware), hardware, or a combination thereof. For example, the functions may be performed by components implemented in various ways, including discrete logic components, one or more application-specific integrated circuits (ASICs), and/or program-controlled processors.
FIG. 7 illustrates a screen displayed on the display unit of an apparatus for supporting the reading of a biometric image according to an embodiment of the present invention.
Referring to FIG. 7, the screen displayed on the display unit 130 may be provided as a graphical user interface (GUI) screen through which the attribute information of a biometric image (e.g., the third biometric image 50) can be adjusted, and the attribute information of the biometric image generated by the user may be obtained through the GUI screen. The displayed screen may include a biometric image screen section 130a and an image style adjustment screen section 130b. The biometric image screen section 130a may display the first biometric image 1, which is to be read and diagnosed based on attribute information, and the third biometric image 50, which is used for comparison against the first biometric image 1.
The image style adjustment screen section 130b may display a scroll bar 131 for adjusting the style of the second biometric image (not shown). Styles (adversarial style images) may be generated in various ways by the methods described above with reference to FIGS. 2 to 4, and each style may hold its own attribute information. The attribute information may be tissue attribute information labeled according to the type of style, and the style of the second biometric image (not shown) may change as the attribute information is adjusted. In one embodiment, as shown, when the second biometric image is a fundus image, various attribute information including the C/D ratio, retinal nerve fiber layer, and disc rim thinning may be displayed on the style adjustment screen 130b.
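How a scroll-bar position might be mapped to a labeled attribute value can be sketched as follows. This is a hypothetical illustration only: the attribute names are taken from the fundus example above, but the ranges and the linear mapping are assumptions, not taken from the patent.

```python
# Hypothetical attribute ranges for a fundus-image style control
# (illustrative values; the patent does not specify ranges).
ATTRIBUTE_RANGES = {
    "C/D ratio": (0.0, 1.0),
    "Disc Rim Thinning": (0.0, 1.0),
}

def scrollbar_to_attribute(name, position):
    """Linearly interpolate a 0-100 scroll-bar position into the attribute's range."""
    lo, hi = ATTRIBUTE_RANGES[name]
    return lo + (hi - lo) * position / 100.0

assert scrollbar_to_attribute("C/D ratio", 0) == 0.0
assert scrollbar_to_attribute("C/D ratio", 50) == 0.5
assert scrollbar_to_attribute("Disc Rim Thinning", 100) == 1.0
```

Under this sketch, moving the scroll bar 131 simply produces a new attribute value, which the style-generation step would then use to select or synthesize a matching style.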
In this way, when the third biometric image 50, expressed by freely adjusting the style (attribute information) of the second biometric image with the biometric image reading device 100 according to the present invention, is displayed on the screen of the display unit 130 together with the first biometric image 1 to be read, the first biometric image 1 can be read in comparison with the third biometric image 50. This enables more reliable reading, and because the result of the reading can be explained logically, an accurate reading can be made.
FIG. 8 is a flowchart exemplarily illustrating a method of supporting the reading of a biometric image according to an embodiment of the present invention.
Referring to FIG. 8, in step S810, when a first biometric image of a subject is acquired from an external device, for example, a camera or another device (not shown) interworking with the computing device 110, the processor 111 extracts first attribute information from the first biometric image based on the first machine learning model 3. The first biometric image may be stored in the memory unit 113 or the storage device 115.
In step S920, the style of the second biometric image 10 is transformed based on the second machine learning model 30, and a third biometric image 50 is generated by the processor 111. Here, the second biometric image 10 is an image paired with the first biometric image. The style of the second biometric image 10 is transformed by the processor 111 based on the first sub machine learning model 30a of the second machine learning model 30, whereby a plurality of adversarial style images 10a may be generated from the second biometric image 10. The second sub machine learning model 30b may then select one of the plurality of adversarial style images 10a and fuse it (fusion and aggregation) with the input second biometric image 10 to generate the third biometric image 50. Accordingly, second attribute information comparable to the first attribute information 1a of the first biometric image 1 may be reflected in the third biometric image 50.
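The style-transformation step can be sketched in simplified form. This is illustrative only: the sub models 30a and 30b are learned networks in the patent, whereas here they are replaced by fixed pixel-level adjustments of the kinds the description associates with adversarial noise (gradation level, R/G/B color, contrast ratio) and by a plain weighted blend standing in for fusion and aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)
second_image = rng.random((8, 8, 3))  # stand-in for the second biometric image (RGB, 0..1)

def adjust_style(img, gamma=1.2, channel_gain=(1.0, 0.9, 1.1), contrast=1.3):
    """Map simple 'noise' parameters onto the image: gradation (gamma),
    per-channel R/G/B gain, and contrast about the mean."""
    out = img ** gamma                    # gradation-level adjustment
    out = out * np.array(channel_gain)    # R, G, B color adjustment
    mean = out.mean()
    out = (out - mean) * contrast + mean  # contrast-ratio adjustment
    return np.clip(out, 0.0, 1.0)

# Generate several candidate style images, pick one, and fuse it
# with the original image by a weighted aggregation.
styles = [adjust_style(second_image, gamma=g) for g in (0.8, 1.0, 1.2)]
chosen = styles[2]
third_image = np.clip(0.5 * second_image + 0.5 * chosen, 0.0, 1.0)
assert third_image.shape == second_image.shape
assert 0.0 <= third_image.min() and third_image.max() <= 1.0
```

The point of the sketch is the data flow: one input image yields several candidate styles, one is selected, and the fused result carries the adjusted attributes while remaining comparable to the original.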
In step S930, the first biometric image having the first attribute information and the third biometric image having the second attribute information are displayed on the display unit. The displayed screen may include a biometric image screen section 130a and an image style adjustment screen section 130b. The biometric image screen section 130a may display the first biometric image 1, to be read and diagnosed based on attribute information, and the third biometric image 50, for comparison against the first biometric image 1; the image style adjustment screen section 130b may display a scroll bar 131 for adjusting the style of the second biometric image (not shown).
As described above, the apparatus and method for supporting biometric image reading according to the present invention can support explainable biometric image reading when reading a biometric image based on a machine learning model, enabling medical staff to read biometric images with greater reliability.
As in the above embodiments, it will be clearly understood that the present invention may be achieved through a combination of software and hardware, or by hardware alone. The objects of the technical solution of the present invention, or the parts contributing over the prior art, may be implemented in the form of program instructions executable through various computer components and recorded on a machine-readable recording medium. The machine-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the machine-readable recording medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the field of computer software.
Examples of machine-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code, such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
The hardware device may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa. The hardware device may include a processor, such as a CPU or GPU, coupled to a memory such as ROM/RAM for storing program instructions and configured to execute the instructions stored in the memory, and may include a communication unit for exchanging signals with external devices. In addition, the hardware device may include a keyboard, a mouse, and other external input devices for receiving commands written by developers.
The embodiments according to the present invention have been described above, and it is apparent to those of ordinary skill in the art that the present invention may be embodied in other specific forms without departing from its spirit or scope. Therefore, the embodiments described above are to be regarded as illustrative rather than restrictive, and accordingly the present invention is not limited to the above description but may be varied within the scope of the appended claims and their equivalents.

Claims (7)

  1. An apparatus for supporting biometric image reading, the apparatus comprising:
    a processor; and
    a memory including one or more instructions implemented to be executed by the processor,
    wherein the processor:
    extracts first attribute information from a first biometric image based on a first machine learning model;
    transforms, based on a second machine learning model, a style of a second biometric image paired with the first biometric image to generate a third biometric image having second attribute information comparable to the first attribute information; and
    displays, on a display unit, the first biometric image having the first attribute information and the third biometric image having the second attribute information.
  2. The apparatus of claim 1, wherein the second machine learning model includes a first sub machine learning model and a second sub machine learning model, and wherein the style of the second biometric image is transformed based on the first sub machine learning model to generate an adversarial style image.
  3. The apparatus of claim 2, wherein the style is transformed by mapping adversarial noise onto the second biometric image.
  4. The apparatus of claim 3, wherein the adversarial noise includes at least one of a value adjusting a gradation level of the R, G, and B pixels of the second biometric image, a value adjusting a color change of the R, G, and B pixels of the second biometric image, and a value adjusting a contrast ratio of the second biometric image.
  5. The apparatus of claim 2, wherein the second sub machine learning model generates the third biometric image by fusing the adversarial style image with the second biometric image.
  6. The apparatus of claim 1, wherein a screen displayed on the display unit includes a biometric image screen section on which the third biometric image is displayed and an image style adjustment screen section on which a scroll bar for adjusting the style of the second biometric image is displayed.
  7. A method of supporting reading of a biometric image of a subject by a processor, the method comprising:
    extracting first attribute information from a first biometric image of the subject based on a first machine learning model;
    transforming, based on a second machine learning model, a style of a second biometric image paired with the first biometric image to generate a third biometric image having second attribute information comparable to the first attribute information; and
    displaying the first biometric image having the first attribute information and the third biometric image having the second attribute information.
PCT/KR2022/006536 2021-05-31 2022-05-09 Exponible biometric image reading support device and method WO2022255674A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210069684A KR20220161672A (en) 2021-05-31 2021-05-31 Explainable apparatus and method for supporting reading of biometric image
KR10-2021-0069684 2021-05-31

Publications (1)

Publication Number Publication Date
WO2022255674A1 2022-12-08

Family

ID=84323397

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/006536 WO2022255674A1 (en) 2021-05-31 2022-05-09 Exponible biometric image reading support device and method

Country Status (2)

Country Link
KR (1) KR20220161672A (en)
WO (1) WO2022255674A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101957811B1 (en) * 2018-08-07 2019-03-13 주식회사 뷰노 Method for computing severity with respect to dementia of subject based on medical image and apparatus using the same
JP2019159958A (en) * 2018-03-14 2019-09-19 オムロン株式会社 Face image identification system, identification device generation device, identification device, image identification system and identification system
KR102046134B1 (en) * 2019-04-02 2019-11-18 주식회사 루닛 Neural network training method for utilizing differences between a plurality of images, and method thereof
KR20200017261A (en) * 2018-08-08 2020-02-18 주식회사 딥바이오 System for biomedical image diagnosis, method for biomedical image diagnosis and terminal performing the same
KR20210032951A * 2018-06-13 2021-03-25 Cosmo Artificial Intelligence - AI Ltd. Systems and methods for training generative adversarial networks, and use of trained generative adversarial networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101848321B1 2017-10-27 2018-04-20 VUNO Inc. Method for facilitating diagnosis of subject based on fovea image thereof and apparatus using the same


Also Published As

Publication number Publication date
KR20220161672A (en) 2022-12-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22816333

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE