WO2022145841A1 - Lesion reading method and apparatus therefor - Google Patents

Lesion reading method and apparatus therefor

Info

Publication number
WO2022145841A1
WO2022145841A1 (PCT/KR2021/019376)
Authority
WO
WIPO (PCT)
Prior art keywords
image
lesion
dimensional
image information
region
Prior art date
Application number
PCT/KR2021/019376
Other languages
English (en)
Korean (ko)
Inventor
김한석
유영성
피재우
이두형
기리시스리니바산
가우담난다쿠마르
아킬라페루말라
Original Assignee
주식회사 피노맥스
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 피노맥스
Publication of WO2022145841A1

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A61B6/032 Transmission computed tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical

Definitions

  • The technical idea of the present disclosure relates to a lesion reading method and an apparatus therefor.
  • An object of the present disclosure is to provide a lesion reading method capable of automatically and quickly reading a lesion from a CT image and displaying the presence or absence of a lesion more intuitively, and an apparatus therefor.
  • A lesion reading method includes: acquiring an image group including a plurality of two-dimensional images generated corresponding to a continuous volume of a chest; generating first image information obtained by extracting a lung region from each of the two-dimensional images by inputting the image group into a learned first network function; generating second image information obtained by detecting a predetermined lesion region from each of the two-dimensional images by inputting the image group into a learned second network function; and generating a 3D image including the lesion region based on the first image information and the second image information (a minimal sketch of this pipeline follows).
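  • A minimal Python sketch of that pipeline, assuming hypothetical `lung_net` and `lesion_net` callables that map a 2D slice to a lung/lobe mask and a lesion mask; these names and signatures are illustrative, not the patent's own code:

```python
import numpy as np

def read_lesions(image_group, lung_net, lesion_net):
    """Illustrative sketch of the claimed steps (all names hypothetical)."""
    # First image information: per-slice lung/lobe masks from network 1.
    first_info = np.stack([lung_net(s) for s in image_group])     # (N, H, W)
    # Second image information: per-slice lesion masks from network 2.
    second_info = np.stack([lesion_net(s) for s in image_group])  # (N, H, W)
    # Stacked along the vertical axis, the two mask volumes share one 3D
    # coordinate space, from which a 3D lung-plus-lesion image can be rendered.
    return first_info, second_info
```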
  • Generating the first image information may include dividing the lung region extracted from each 2D image into a plurality of lobe regions through the first network function, and the first image information may include information about the divided lobe regions.
  • The three-dimensional image may include three-dimensional lobe regions corresponding to the plurality of segmented lobe regions, and the method may further include calculating a ratio of the lesion region with respect to each of the three-dimensional lobe regions.
  • The method may further include: generating a plurality of second images respectively corresponding to the plurality of 2D images based on at least one of the first image information and the second image information, wherein each second image includes information about at least one of the extracted lung region and the detected lesion region; searching for the second image corresponding to a predetermined reference coordinate among the vertical coordinates of the 3D image; and displaying the searched second image by matching it with the 3D image based on the reference coordinate.
  • The method may further include receiving a user input for setting the reference coordinate and, when the reference coordinate is changed according to the user input, updating the second image displayed by being matched with the 3D image so as to correspond to the changed reference coordinate.
  • A lesion reading apparatus includes: a memory storing a program for lesion reading; and at least one processor configured to, by executing the program, acquire an image group including a plurality of two-dimensional images generated corresponding to a continuous volume of the chest, generate first image information obtained by extracting a lung region from each of the two-dimensional images by inputting the image group to a learned first network function, generate second image information obtained by detecting a predetermined lesion region from each of the two-dimensional images by inputting the image group to a learned second network function, and generate a 3D image including the lesion region based on the first image information and the second image information.
  • The processor may divide the lung region extracted from each 2D image into a plurality of lobe regions through the first network function, and the first image information may include information about the divided plurality of lobe regions.
  • The three-dimensional image may include three-dimensional lobe regions corresponding to the plurality of divided lobe regions, and the processor may calculate a ratio of the lesion region with respect to each of the three-dimensional lobe regions.
  • The processor may generate a plurality of second images respectively corresponding to the plurality of 2D images based on at least one of the first image information and the second image information, search for the second image corresponding to a predetermined reference coordinate among the vertical coordinates of the 3D image, and display the searched second image by matching it with the 3D image; each generated second image may include information on at least one of the extracted lung region and the detected lesion region.
  • When the reference coordinate is changed according to a user input, the processor may update the second image displayed with the 3D image so as to correspond to the changed reference coordinate.
  • According to the technical spirit of the present disclosure, a lung region and a lesion region can be quickly and accurately detected from a chest CT image using a neural network, and a lesion reading result can be provided more intuitively as a three-dimensional image.
  • The effects obtainable from the lesion reading method and apparatus according to the technical spirit of the present disclosure are not limited to the above-mentioned effects, and other effects not mentioned will be clearly understood by those of ordinary skill in the art to which the present disclosure belongs from the description below.
  • FIG. 1 is a flowchart illustrating a lesion reading method according to an embodiment of the present disclosure.
  • FIG. 2 is a flowchart illustrating a lesion reading method according to an embodiment of the present disclosure.
  • FIG. 3 exemplarily illustrates an operation of a network function in a lesion reading method according to an embodiment of the present disclosure.
  • FIGS. 5 and 6 exemplarily show a 3D image generated through the lesion reading method according to an embodiment of the present disclosure and a 2D image displayed by being matched with the 3D image.
  • FIG. 7 is a block diagram schematically illustrating a configuration of a lesion reading apparatus according to an embodiment of the present disclosure.
  • When a component is referred to as being "connected" or "coupled" to another component, it may be directly connected or directly coupled to the other component, but it should be understood that, unless there is a description to the contrary, it may also be connected or coupled via another component in between.
  • The term "~unit" used herein means a unit that processes at least one function or operation, and may be implemented as hardware such as a processor, a microprocessor, a microcontroller, a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA), as software, or as a combination of hardware and software.
  • The division of constituent units in the present disclosure is merely a division according to the main function each unit is responsible for. That is, two or more constituent units described below may be combined into a single constituent unit, or one constituent unit may be divided into two or more according to more subdivided functions.
  • Each of the constituent units described below may additionally perform some or all of the functions of other constituent units in addition to its own main function, and some of the main functions of a constituent unit may, of course, be performed exclusively by another constituent unit.
  • Herein, a network function may be used interchangeably with a neural network.
  • a neural network may be generally composed of a set of interconnected computational units that may be referred to as nodes, and these nodes may be referred to as neurons.
  • a neural network is generally configured to include a plurality of nodes. Nodes constituting the neural network may be interconnected by one or more links.
  • Some of the nodes constituting the neural network may constitute one layer based on their distance from the initial input node. For example, the set of nodes at a distance of n from the initial input node may constitute the n-th layer (see the toy sketch below).
  • The neural network described herein may include a deep neural network (DNN) including a plurality of hidden layers in addition to an input layer and an output layer.
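  • A toy illustration of this layering, not any network used in the disclosure: each weight matrix below maps one layer's node values to the next layer's, so nodes reached in n steps from the input form the n-th layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def dnn_forward(x, weights):
    # Hidden layers: nodes at distance 1..n-1 from the input (ReLU units).
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)
    # Output layer: the final set of nodes.
    return weights[-1] @ x

# input(4) -> hidden(8) -> hidden(8) -> output(2): a small DNN.
weights = [rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(2, 8))]
y = dnn_forward(rng.normal(size=4), weights)
```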
  • FIG. 1 is a flowchart illustrating a lesion reading method according to an embodiment of the present disclosure.
  • The lesion reading method 100 may be performed on a personal computer, a workstation, a server computer device, or the like, or may be performed on a separate device therefor.
  • the lesion reading method 100 may be performed by one or more computing devices.
  • One or more steps of the lesion reading method 100 according to an embodiment of the present disclosure may be performed in a client device, and the other steps may be performed in a server device.
  • the client device and the server device may be connected to each other through a network to transmit/receive an operation result.
  • the lesion reading method 100 may be performed by distributed computing technology.
  • The lesion reading apparatus may acquire an image group including a plurality of 2D images generated corresponding to a continuous volume of the chest.
  • The image group may be a computed tomography (CT) image of the subject's chest. That is, the plurality of 2D images constituting the image group may be a plurality of slices obtained by continuously photographing cross-sections of the chest, including the lungs, in one direction through computed tomography. By stacking these two-dimensional images, three-dimensional image information about the chest can be obtained (a minimal stacking sketch follows).
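  • For illustration only, with placeholder arrays standing in for real scanner slices:

```python
import numpy as np

# 120 placeholder axial slices of 512 x 512 pixels; real slices would be
# ordered by their position along the scan (vertical) axis.
slices = [np.zeros((512, 512), dtype=np.int16) for _ in range(120)]
volume = np.stack(slices, axis=0)  # shape (120, 512, 512): z, y, x
```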
  • The lesion reading apparatus may input the image group to the learned first network function, extract a lung region from each 2D image, and generate first image information in which the lung region is further divided into a plurality of lobe regions.
  • The first network function may be one for which learning of lung region extraction and lobe region segmentation has been performed in advance on training data (e.g., chest CT images on which lung region extraction and lobe region segmentation have been performed by an expert).
  • The first image information may be another set of 2D images obtained by masking the lung region and/or lobe regions onto the plurality of 2D images, or may be data including coordinate information for the lung region and/or lobe regions.
  • The lesion reading apparatus may generate second image information obtained by detecting a predetermined lesion region from each 2D image by inputting the image group to the learned second network function.
  • The second network function may be one for which learning of lesion region detection has been performed in advance on training data (e.g., chest CT images on which lesion region detection has been performed by an expert).
  • the second image information may be another two-dimensional image obtained by masking a lesion region on a plurality of two-dimensional images, or data including coordinate information on the lesion region.
  • the lesion region may include a region of pneumonia caused by a viral infection.
  • the lesion reading apparatus may generate and/or display a 3D image including the lesion region based on the first image information and the second image information.
  • The 3D image may be displayed by superimposing a 3D lesion region generated based on the second image information on a 3D lung image generated based on the first image information.
  • the 3D image may include information on a plurality of divided lobe regions.
  • A plurality of 3D lobe regions corresponding to the lobe regions divided in step S120 may be displayed in the 3D image using different colors (a toy rendering sketch follows).
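  • A toy matplotlib rendering under assumed encodings (integer lobe labels, boolean lesion mask); it only illustrates per-lobe coloring with the lesion overlaid, not the patent's actual renderer:

```python
import matplotlib.pyplot as plt
import numpy as np

# Tiny stand-in volumes: two "lobes" and a small lesion inside lobe 1.
lobe_labels = np.zeros((8, 8, 8), dtype=int)
lobe_labels[:4] = 1
lobe_labels[4:] = 2
lesion_mask = np.zeros(lobe_labels.shape, dtype=bool)
lesion_mask[2:4, 2:4, 2:4] = True

# One RGBA color per voxel: a distinct color per lobe, red for lesion voxels.
facecolors = np.zeros(lobe_labels.shape + (4,))
facecolors[lobe_labels == 1] = (0.2, 0.4, 0.8, 0.5)
facecolors[lobe_labels == 2] = (0.2, 0.7, 0.3, 0.5)
facecolors[lesion_mask] = (1.0, 0.0, 0.0, 1.0)

ax = plt.figure().add_subplot(projection="3d")
ax.voxels(lobe_labels > 0, facecolors=facecolors)
plt.show()
```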
  • The method 100 may further include calculating the ratio of the lesion region with respect to each of the three-dimensional lobe regions. This ratio can be used as statistical data for judging or estimating in which part of the lung lesions such as pneumonia caused by a specific virus are concentrated, where lesions are concentrated according to severity, where lesions are concentrated according to the patient's physical characteristics, and the like (a minimal computation sketch follows).
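  • A minimal sketch of such a per-lobe ratio, assuming the same illustrative encodings as above (labels 1 to 5 for the five lobes, boolean lesion mask):

```python
import numpy as np

def lesion_ratio_per_lobe(lobe_labels, lesion_mask):
    """Fraction of each 3D lobe region's voxels that lie in the lesion region."""
    ratios = {}
    for lobe_id in range(1, 6):  # five lobes, labeled 1..5
        lobe = lobe_labels == lobe_id
        n = int(lobe.sum())
        ratios[lobe_id] = int((lesion_mask & lobe).sum()) / n if n else 0.0
    return ratios
```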
  • FIG. 2 is a flowchart illustrating a lesion reading method according to an embodiment of the present disclosure.
  • the method 200 of FIG. 2 may further include steps S210 to S240 in addition to the method 100 of FIG. 1.
  • the lesion reading apparatus may generate a plurality of second images corresponding to the plurality of 2D images based on at least one of the first image information and the second image information.
  • the second image may be an image in which a lung region and/or a lobe region is displayed on each of the 2D images, or an image in which a lesion region is displayed.
  • the second image may be an image in which both a lung region (and/or a lobe region) and a lesion region are displayed with respect to each of the two-dimensional images.
  • the lesion reading apparatus may receive a user input for setting reference coordinates.
  • the reference coordinate serves as a reference for selecting a two-dimensional second image to be displayed by matching with the three-dimensional image, and may be a vertical coordinate value in a three-dimensional coordinate space to which the generated three-dimensional image is mapped.
  • the user input may be an input such as clicking or scrolling an area of a 3D image displayed on the display device.
  • Before a user input for the reference coordinate is received, the lesion reading apparatus may be set to display the second image corresponding to a default reference coordinate by matching it with the 3D image.
  • the lesion reading apparatus may search for a second image corresponding to the reference coordinates.
  • The 3D image may be generated in a predetermined 3D coordinate space by vertically stacking the first image information and the second image information obtained from the 2D images included in the input image group; accordingly, for a reference coordinate on the vertical axis of that space, the lesion reading apparatus may search for the second image corresponding to that position from among the plurality of second images.
  • the lesion reading apparatus may display the searched second image by matching it with the 3D image based on the reference coordinates.
  • The lesion reading apparatus may arrange the retrieved second image so as to be perpendicular to the vertical coordinate axis of the three-dimensional space, and display it so that the lung region of the second image is matched with the horizontal cross-section of the three-dimensional (lung) image.
  • When the reference coordinate is changed, the lesion reading apparatus may update the displayed second image. That is, the lesion reading apparatus may search for a new second image corresponding to the changed reference coordinate, match it with the 3D image, and display it again (a minimal lookup-and-update sketch follows).
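  • A minimal sketch of that lookup and update, assuming each second image carries the vertical coordinate of its source slice; `display` stands in for the actual rendering call:

```python
import numpy as np

def find_second_image(z_ref, z_coords, second_images):
    """Return the second image whose vertical coordinate is closest to z_ref."""
    idx = int(np.argmin(np.abs(np.asarray(z_coords) - z_ref)))
    return second_images[idx]

def on_reference_changed(z_ref, z_coords, second_images, display):
    # The user clicked or scrolled to a new reference coordinate:
    # re-search and redraw the matching slice against the 3D lung image.
    display(find_second_image(z_ref, z_coords, second_images), z_ref)
```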
  • FIG. 3 exemplarily illustrates an operation of a network function in a lesion reading method according to an embodiment of the present disclosure.
  • a chest CT image 10 including a plurality of two-dimensional tomographic images may be input to the learned first network function 310 and the second network function 320, respectively.
  • The first network function 310 may extract a lung region from each of the plurality of input two-dimensional tomographic images, or may additionally divide the lung region into a plurality of lobe regions, and may generate the first image information 20 by masking the extracted and/or divided regions onto each two-dimensional tomographic image.
  • The second network function 320 may detect a lesion region corresponding to pneumonia from each of the plurality of input 2D tomographic images and mask the detected lesion region onto the 2D tomographic image, thereby generating the second image information 30.
  • the lesion reading apparatus may generate a three-dimensional lung image based on the first image information and the second image information generated by the first network function and the second network function, respectively.
  • The three-dimensional lung image may include five three-dimensional lobe regions 410 and a three-dimensional lesion region 420 distributed in each of the corresponding regions, as shown in (a) of FIG. 4.
  • the 3D lung image may include the 3D lesion area 420 distributed over the entire lung area without distinction of the lobe area.
  • FIGS. 5 and 6 exemplarily show a 3D image generated through the lesion reading method according to an embodiment of the present disclosure and a 2D image displayed by being matched with the 3D image.
  • The lesion reading apparatus may generate a plurality of second images corresponding to each two-dimensional tomographic image based on the first image information and/or the second image information, and may display one of them by matching it with the three-dimensional lung image.
  • each second image may include information on a lung region, a lobe region, and/or a lesion region.
  • A second image corresponding to a reference coordinate input by the user or set as a default value is selected, and the selected second image is arranged horizontally, perpendicular to the vertical coordinate axis of the three-dimensional lung image, so that its lung region is matched with the horizontal cross-section of the 3D lung image.
  • When the reference coordinate is changed, the second image may be updated and displayed. For example, as the reference coordinate increases, the corresponding second images are sequentially retrieved, and each retrieved second image is displayed while moving upward along the vertical direction, remaining matched with the 3D lung image.
  • FIG. 7 is a block diagram schematically illustrating a configuration of a lesion reading apparatus according to an embodiment of the present disclosure.
  • the communication unit 710 may receive input data (such as a chest CT image) for reading the lesion.
  • the communication unit 710 may include a wired/wireless communication unit.
  • The communication unit 710 may include one or more components that enable communication through a local area network (LAN), a wide area network (WAN), a value added network (VAN), a mobile radio communication network, a satellite communication network, or a combination thereof.
  • When the communication unit 710 includes a wireless communication unit, it may wirelessly transmit and receive data or signals using cellular communication or a wireless LAN (e.g., Wi-Fi).
  • the communication unit may transmit/receive data or signals to and from an external device or an external server under the control of the processor 740.
  • the input unit 720 may receive various user commands through external manipulation.
  • the input unit 720 may include or connect one or more input devices.
  • the input unit 720 may be connected to an interface for various inputs, such as a keypad and a mouse, to receive a user command.
  • The input unit 720 may include not only a USB port but also an interface such as Thunderbolt.
  • the input unit 720 may include or combine various input devices such as a touch screen and a button to receive an external user command.
  • the memory 730 may store a program for the operation of the processor 740 and may temporarily or permanently store input/output data.
  • The memory 730 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM, SRAM, ROM, EEPROM, PROM, magnetic memory, a magnetic disk, and an optical disk.
  • The memory 730 may store various network functions and algorithms, and may store various data, programs (one or more instructions), applications, software, commands, codes, and the like for driving and controlling the device 700.
  • the processor 740 may control the overall operation of the device 700 .
  • the processor 740 may execute one or more programs stored in the memory 730 .
  • the processor 740 may mean a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which methods according to the technical idea of the present disclosure are performed.
  • The processor 740 may acquire an image group including a plurality of two-dimensional images generated corresponding to a continuous volume of the chest, generate first image information obtained by extracting a lung region from each two-dimensional image by inputting the image group to a learned first network function, generate second image information obtained by detecting a predetermined lesion region from each two-dimensional image by inputting the image group to a learned second network function, and generate a 3D image including the lesion region based on the first image information and the second image information.
  • the lesion reading method may be implemented in the form of a program command that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the present disclosure, or may be known and available to those skilled in the art of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • the lesion reading method according to the disclosed embodiments may be provided as included in a computer program product.
  • Computer program products may be traded between sellers and buyers as commodities.
  • the computer program product may include a S/W program and a computer-readable storage medium in which the S/W program is stored.
  • Computer program products may include products in the form of S/W programs (e.g., downloadable apps) distributed electronically through manufacturers of electronic devices or through electronic markets (e.g., Google Play Store, App Store).
  • The storage medium may be a server of the manufacturer, a server of the electronic market, or a storage medium of a relay server that temporarily stores the S/W program.
  • In a system consisting of a server and a client device, the computer program product may include a storage medium of the server or a storage medium of the client device.
  • Alternatively, when there is a third device (e.g., a smartphone) communicatively connected to the server or the client device, the computer program product may include a storage medium of the third device.
  • the computer program product may include the S/W program itself transmitted from the server to the client device or a third device, or transmitted from the third device to the client device.
  • one of the server, the client device and the third device may execute the computer program product to perform the method according to the disclosed embodiments.
  • Two or more of a server, a client device, and a third device may execute the computer program product to perform the method according to the disclosed embodiments in a distributed manner.
  • For example, a server (e.g., a cloud server or an artificial intelligence server) may execute a computer program product stored in the server to control a client device communicatively connected to the server to perform the method according to the disclosed embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • General Physics & Mathematics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Veterinary Medicine (AREA)
  • Geometry (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Computer Graphics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pulmonology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a lesion reading method and an apparatus therefor. According to an embodiment, the present disclosure provides a lesion reading method that may comprise the steps of: acquiring an image group including a plurality of two-dimensional images generated corresponding to a continuous volume of the chest; generating first image information obtained by extracting a lung region from each of the two-dimensional images by inputting the image group into a trained first network function; generating second image information obtained by detecting a predetermined lesion region from each of the two-dimensional images by inputting the image group into a trained second network function; and generating a three-dimensional image including the lesion region based on the first image information and the second image information.
PCT/KR2021/019376 2020-12-30 2021-12-20 Lesion reading method and apparatus therefor WO2022145841A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0187884 2020-12-30
KR1020200187884A KR102367984B1 (ko) 2020-12-30 2020-12-30 Lesion reading method and apparatus therefor

Publications (1)

Publication Number Publication Date
WO2022145841A1 (fr)

Family

ID=80818639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/019376 WO2022145841A1 (fr) 2020-12-30 2021-12-20 Procédé d'interprétation de lésion et appareil associé

Country Status (2)

Country Link
KR (2) KR102367984B1 (fr)
WO (1) WO2022145841A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102501816B1 (ko) * 2022-05-23 2023-02-22 Method for automatic analysis of lung organs using artificial intelligence based on a patient's personalized indicators, and recording medium
KR102501815B1 (ko) * 2022-05-23 2023-02-22 Method and apparatus for automatic analysis of lung organs using artificial intelligence


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101767069B1 (ko) * 2015-12-03 2017-08-11 Method and apparatus for tracking tumor motion during radiation therapy using image registration and tumor matching between 4D MDCT images for treatment planning and 4D CBCT images acquired during treatment
KR102132566B1 (ko) * 2019-10-24 2020-07-10 Lesion reading apparatus and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010063514A (ja) * 2008-09-09 2010-03-25 Konica Minolta Medical & Graphic Inc 医用画像診断支援装置、医用画像診断支援方法及びプログラム
JP2015535434A (ja) * 2013-02-13 2015-12-14 三菱電機株式会社 胸部4dctをシミュレートする方法
KR102152385B1 (ko) * 2019-08-08 2020-09-04 주식회사 딥노이드 특이점 진단 장치 및 진단 방법

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHRISTE, ANDREAS et al.: "Computer-Aided Diagnosis of Pulmonary Fibrosis Using Deep Learning and CT Images", Investigative Radiology, vol. 54, no. 10, 1 October 2019, pages 627-632, XP055948386, ISSN: 0020-9996, DOI: 10.1097/RLI.0000000000000574 *
TANG, HAO; ZHANG, CHUPENG; XIE, XIAOHUI: "Automatic Pulmonary Lobe Segmentation Using Deep Learning", 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), 8 April 2019, pages 1225-1228, XP033576598, DOI: 10.1109/ISBI.2019.8759468 *

Also Published As

Publication number Publication date
KR20220097859A (ko) 2022-07-08
KR102367984B1 (ko) 2022-03-03

Similar Documents

Publication Publication Date Title
WO2022145841A1 (fr) Lesion reading method and apparatus therefor
WO2017022908A1 (fr) Method and program for computing bone age using deep neural networks
WO2019103440A1 (fr) Method for supporting reading of a medical image of a subject and device using same
WO2020062493A1 (fr) Image processing method and apparatus
WO2021034138A1 (fr) Dementia evaluation method and apparatus using same
WO2021137454A1 (fr) Artificial-intelligence-based method and system for analyzing user medical information
WO2021071288A1 (fr) Fracture diagnosis model training method and device
WO2019098415A1 (fr) Method for determining whether a subject has developed cervical cancer, and device using same
WO2022131642A1 (fr) Apparatus and method for determining disease severity on the basis of medical images
WO2023095989A1 (fr) Method and device for analyzing multimodal medical images for brain disease diagnosis
WO2019143021A1 (fr) Image visualization support method and apparatus using same
WO2020111557A1 (fr) Device and method for creating a blood vessel map, and computer program for executing the method
CN113257412B (zh) Information processing method and apparatus, computer device, and storage medium
WO2023282389A1 (fr) Fat mass computation method using head and neck images, and device therefor
WO2023013959A1 (fr) Apparatus and method for predicting beta-amyloid accumulation
WO2023113285A1 (fr) Body image management method and apparatus using same
KR20220143187A (ko) Method and apparatus for automatic extraction of emphysema using deep learning
WO2021177771A1 (fr) Method and system for predicting biomarker expression from a medical image
WO2022177069A1 (fr) Labeling method and computing device therefor
WO2023058837A1 (fr) Method for detecting diaphragm from chest image, and apparatus therefor
WO2023282388A1 (fr) Method and apparatus for providing information necessary for diagnosing lymph node metastasis of thyroid cancer
WO2023163287A1 (fr) Medical image analysis method and apparatus
WO2019168280A1 (fr) Method and device for reading a lesion from a capsule endoscope image using a neural network
WO2024043531A1 (fr) Model training method and apparatus for determining nasal cavity mass, and nasal cavity mass determination method and apparatus
KR20220142570A (ko) Method and apparatus for automatic lung extraction and lung region separation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21915639

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/10/2023)