WO2019164277A1 - Method and device for evaluating bleeding using a surgical image - Google Patents

Method and device for evaluating bleeding using a surgical image

Info

Publication number
WO2019164277A1
WO2019164277A1 (PCT/KR2019/002095)
Authority
WO
WIPO (PCT)
Prior art keywords
bleeding
surgical image
computer
amount
surgical
Prior art date
Application number
PCT/KR2019/002095
Other languages
English (en)
Korean (ko)
Inventor
이종혁
형우진
양훈모
김호승
Original Assignee
(주)휴톰
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020180129709A (external priority; patent KR102014364B1)
Application filed by (주)휴톰
Publication of WO2019164277A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30: Surgical robots
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 3/00: Geometric image transformations in the plane of the image

Definitions

  • The present invention relates to a method and apparatus for evaluating bleeding using surgical images.
  • Deep learning is defined as a set of machine learning algorithms that attempt to achieve a high level of abstraction (summarizing key content or functions from large or complex data) through a combination of several nonlinear transformations. Broadly, deep learning can be seen as the field of machine learning that teaches computers to think the way people do.
  • One problem to be solved by the present invention is to provide a method and apparatus for evaluating bleeding using a surgical image.
  • Another problem to be solved by the present invention is to provide a method and apparatus for recognizing whether bleeding has occurred in a surgical image and measuring the bleeding region and the amount of bleeding.
  • In one embodiment, a bleeding evaluation method using a surgical image, performed by a computer, includes obtaining a surgical image, recognizing whether a bleeding region exists in the surgical image based on deep-learning-based learning, and estimating the location of the bleeding region from the surgical image based on the recognition result.
  • The step of recognizing whether a bleeding region exists may include extracting feature information from the surgical image using deep-learning-based learning with a convolutional neural network (CNN), and recognizing whether a bleeding region exists in the surgical image based on the feature information.
  • Estimating the location of the bleeding region may include specifying the bleeding region in the surgical image based on the feature information.
  • Estimating the location of the bleeding region may include converting pixels in the surgical image into specific values based on the feature information, and specifying the bleeding region based on those pixel values.
  • The method may further include calculating the amount of bleeding in the bleeding region based on the location of the bleeding region.
  • The amount of bleeding may be calculated using pixel information of the bleeding region.
  • Calculating the amount of bleeding may include obtaining depth information of the bleeding region based on a depth map of the surgical image, and calculating the amount of bleeding by estimating the volume corresponding to the bleeding region based on that depth information.
  • When gauze is included in the surgical image, the amount of bleeding may be calculated by additionally using gauze information.
  • The amount of bleeding may also be calculated by comparing the surgical image with a corresponding surgical image in which no bleeding region exists.
  • An apparatus according to an embodiment includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory, wherein the processor, by executing the instructions, acquires a surgical image, recognizes whether a bleeding region exists in the surgical image based on deep-learning-based learning, and estimates the location of the bleeding region from the surgical image based on the recognition result.
  • A computer program according to an embodiment of the present invention is combined with a computer (hardware) and stored in a computer-readable recording medium to perform the bleeding evaluation method using a surgical image.
  • By estimating the bleeding region in the surgical image and calculating the amount of bleeding, a learning model specialized for bleeding can be provided.
  • Since the bleeding region and the amount of bleeding can be determined from the surgical image, the total bleeding that occurred in a specific surgery can be estimated, which in turn provides a criterion for evaluating the surgery by its degree of bleeding.
  • According to an embodiment of the present invention, the presence and degree of bleeding can be derived automatically from the surgical image without intervention by medical staff.
  • FIG. 1 is a flowchart illustrating a bleeding evaluation method using a surgical image according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing an example applicable to the bleeding evaluation method using a surgical image according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an example of calculating a bleeding amount by segmenting a bleeding region in a surgical image according to an exemplary embodiment of the present invention.
  • FIG. 4 is a diagram schematically showing the configuration of an apparatus 100 for performing a bleeding evaluation method using a surgical image according to an embodiment of the present invention.
  • A “part” or “module” refers to a hardware component such as an FPGA or ASIC, or to software, and a “part” or “module” plays certain roles. However, “part” or “module” is not limited to software or hardware.
  • A “unit” or “module” may be configured to reside in an addressable storage medium or to run on one or more processors.
  • A “part” or “module” may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, properties, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functions provided within components and “parts” or “modules” may be combined into a smaller number of components and “parts” or “modules”, or further separated into additional components and “parts” or “modules”.
  • a computer includes all the various devices capable of performing arithmetic processing to provide a result to a user.
  • A computer may be a desktop PC or a notebook, as well as a smartphone, a tablet PC, a cellular phone, a PCS phone (Personal Communication Service phone), a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm Personal Computer (PC), a Personal Digital Assistant (PDA), and the like.
  • When a head mounted display (HMD) device includes a computing function, the HMD device may itself be the computer.
  • the computer may correspond to a server that receives a request from a client and performs information processing.
  • FIG. 1 is a flowchart illustrating a bleeding evaluation method using a surgical image according to an embodiment of the present invention.
  • Although the method of FIG. 1 is described as being performed by a computer for convenience of description, the subject of each step is not limited to a specific device and may be any device capable of computing. That is, in the present embodiment, the computer may mean any apparatus capable of performing the bleeding evaluation method using a surgical image according to an embodiment of the present invention.
  • The bleeding evaluation method using a surgical image may include obtaining a surgical image (S100), recognizing whether a bleeding region exists in the surgical image based on deep-learning-based learning (S200), estimating the location of the bleeding region from the surgical image based on the recognition result (S300), and calculating the amount of bleeding in the bleeding region based on the location of the bleeding region (S400).
  • the computer may acquire a surgical image (S100).
  • the surgical image may be an actual surgical image or may be a virtual image for simulation.
  • The actual surgical image refers to data obtained while medical staff perform an actual operation; for example, it may be an image of an actual surgical scene captured by a camera inserted into the patient's body during minimally invasive surgery such as robotic, laparoscopic, or endoscopic surgery. In other words, the actual surgical image is data recording the surgical site and the operations performed during the actual surgical procedure.
  • The virtual image for simulation refers to a simulation image generated based on a medical image taken by a medical imaging apparatus such as CT, MRI, or PET; for example, it may be a simulation model generated by modeling a medical image of a real patient in three dimensions.
  • a virtual surgical image may be generated by rehearsing or simulating the simulation model in the virtual space. Therefore, the virtual image may be data recorded about the surgical site and the operation during the surgery performed on the simulation model.
  • the computer may recognize whether there is a bleeding region in the surgical image acquired in step S100 based on the deep learning-based learning (S200).
  • The computer may use a learning model trained in advance to recognize whether the surgical image acquired in step S100 is a bleeding image containing a bleeding region (that is, to recognize whether a bleeding region exists in the surgical image).
  • The computer may first learn, through deep learning on a surgical image data set, to recognize the presence or absence of a bleeding region in a surgical image.
  • The surgical image data set may be a training data set in which surgical images are labeled through various learning methods.
  • The surgical image data set may be data learned using machine learning methods such as supervised learning, unsupervised learning, and reinforcement learning.
  • The computer acquires the surgical image data set as training data and uses it to learn to recognize the presence or absence of a bleeding portion (that is, a bleeding region) in a surgical image, thereby building a learning model (e.g., a bleeding recognition model).
  • the computer may construct and store the learning model learned based on the surgical image data set in advance, or use the learning model constructed in another device.
  • When the computer acquires a new surgical image (that is, the surgical image of step S100), it recognizes whether a bleeding region exists in the new surgical image using the learning model (e.g., the bleeding recognition model) trained on the surgical image data set.
  • The computer may estimate the location of the bleeding region from the surgical image based on the recognition result of step S200 (that is, whether the surgical image is a bleeding image or a non-bleeding image) (S300).
  • In one embodiment, when the computer recognizes, based on deep-learning-based learning, that a bleeding region exists in the surgical image, it may specify the bleeding region in the surgical image and estimate the location of the specified bleeding region.
  • The computer may extract feature information from the surgical image through deep-learning-based learning, and recognize and specify the bleeding region in the surgical image based on the extracted feature information.
  • The feature information is information representing features of bleeding; texture information such as color, shape, and texture for identifying bleeding may be used. A detailed process will be described later with reference to FIG. 2.
  • the computer may calculate the amount of bleeding in the bleeding region based on the location of the bleeding region estimated in step S300 (S400).
  • the computer may calculate the amount of bleeding using pixel information of the bleeding region in the surgical image.
  • The computer acquires depth information of the bleeding region based on the depth map of the surgical image, and may calculate the amount of bleeding by estimating the volume corresponding to the bleeding region from that depth information.
  • When gauze is included in the surgical image, the computer may calculate the amount of bleeding in the bleeding region by additionally using gauze information.
  • FIG. 2 is a diagram showing an example applicable to the bleeding evaluation method using a surgical image according to an embodiment of the present invention. That is, FIG. 2 is a specific embodiment applicable to the method of FIG. 1, and a detailed description of processes overlapping those of FIG. 1 is omitted.
  • the computer may acquire a surgical image (S100).
  • In order to evaluate the bleeding that occurred during a particular surgery (e.g., to determine whether bleeding occurred and its degree), the computer may acquire at least one surgical image taken during that surgery.
  • The computer may repeatedly perform each of the steps described below for each of the at least one surgical image captured during the specific surgery. Since the computer can thereby identify the bleeding region and the amount of bleeding from each surgical image, it can finally evaluate the degree of bleeding during that surgery.
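The patent does not specify how the per-image estimates are combined into a per-surgery total. As a hedged sketch, one naive aggregation rule could look like the following (the function name `total_bleeding` and the sum-of-positive-increments rule are illustrative assumptions, not the patent's method):

```python
def total_bleeding(per_frame_amounts_ml):
    """Aggregate per-frame bleeding estimates into one per-surgery figure.

    Summing only the positive frame-to-frame increases is one naive way
    to avoid double-counting the same pooled blood that stays visible
    across consecutive frames. This rule is an assumption for
    illustration; the patent does not disclose its aggregation method.
    """
    total = 0.0
    prev = 0.0
    for amount in per_frame_amounts_ml:
        if amount > prev:
            total += amount - prev  # count newly appeared blood only
        prev = amount
    return total

# Frames where pooled blood grows, is partly suctioned away, then grows again.
print(total_bleeding([0.0, 2.0, 5.0, 3.0, 6.0]))  # 8.0
```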
  • the computer may recognize whether a bleeding region exists in the surgical image based on deep learning based learning (S200).
  • The computer may perform deep-learning-based learning using a convolutional neural network (CNN) by inputting the surgical image into a learning model (e.g., a bleeding recognition model) trained in advance on the surgical image data set (S210).
  • The CNN may recognize whether bleeding occurs in the surgical image using at least one layer.
  • the computer may extract feature information from the surgical image through learning using the CNN (S220).
  • the computer may obtain a feature map by extracting feature information from the surgical image through at least one layer of the CNN.
  • the computer may generate a feature map by extracting information representing a bleeding feature while learning the surgical image through at least one layer of the CNN.
  • the bleeding feature may be extracted using texture information such as color, texture, and material in the surgical image.
  • the computer may recognize whether a bleeding region exists in the surgical image based on the feature information (S230).
  • Through a classifier, the computer recognizes whether a bleeding region exists based on the feature map containing the feature information and, according to the recognition result, classifies the surgical image as a bleeding image or a non-bleeding image.
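The classification step above relies on a trained CNN whose architecture and weights are not disclosed in the patent. As a minimal stand-in sketch, the CNN's feature extraction can be mimicked with a simple redness heuristic (the thresholds, function names, and 5% ratio below are all illustrative assumptions):

```python
def is_blood_like(rgb):
    """Stand-in for the CNN's learned features: a pixel counts as
    blood-like if red strongly dominates green and blue. The thresholds
    are illustrative assumptions, not values from the patent."""
    r, g, b = rgb
    return r > 120 and r > 1.5 * g and r > 1.5 * b

def classify_frame(pixels, min_ratio=0.05):
    """Label a frame 'bleeding' if at least min_ratio of its pixels look
    blood-like, mimicking the bleeding / non-bleeding classifier."""
    blood = sum(1 for p in pixels if is_blood_like(p))
    return "bleeding" if blood / len(pixels) >= min_ratio else "non-bleeding"

# 10% of the pixels are dark red, the rest neutral grey tissue tones.
frame = [(200, 40, 40)] * 10 + [(90, 90, 90)] * 90
print(classify_frame(frame))  # bleeding
```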
  • the computer may estimate the location of the bleeding region from the surgical image based on the recognition result through the learning of the surgical image (S300).
  • The computer may specify the bleeding region in the surgical image based on the feature map containing the feature information. That is, since the computer can identify the part indicating the bleeding feature from the feature map, it can specify the bleeding region in the surgical image.
  • The computer may specify the bleeding region in the surgical image based on the degree of influence on the bleeding feature of the result (e.g., the feature information) derived using at least one layer of the CNN.
  • The computer may convert each pixel in the surgical image into a specific value based on the feature map containing the feature information, and specify the bleeding region based on the specific value of each pixel in the surgical image.
  • Each pixel in the surgical image may be converted into a specific value using a predetermined weight that depends on whether the pixel corresponds to a bleeding feature according to the feature map (that is, on its degree of influence on the bleeding feature).
  • The computer may convert each pixel of the surgical image into a specific value using gradient-weighted Class Activation Mapping (Grad-CAM), which traces back the learning result of recognizing the bleeding region in the surgical image through the CNN.
  • Based on the feature map, the computer assigns a high value (e.g., a high weight) to each pixel recognized as corresponding to a bleeding region and a low value (e.g., a low weight) to each pixel recognized as not corresponding to one, thereby converting the pixel values of the surgical image.
  • Through the converted pixel values, the computer can further highlight the bleeding region in the surgical image, and can segment the bleeding region to estimate its position.
  • The computer may apply the Grad-CAM technique to generate a heat map over the pixels of the surgical image based on the feature map, converting each pixel into a probability value.
  • The computer may specify the bleeding region in the surgical image based on the converted probability value of each pixel. For example, the computer may determine a pixel area having high probability values to be a bleeding region.
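The per-pixel probability map described above can be turned into a segmented bleeding region by simple thresholding. A sketch (the 0.5 threshold and the plain nested-list representation of the map are assumptions for illustration):

```python
def segment_bleeding(prob_map, threshold=0.5):
    """Return the (row, col) positions whose Grad-CAM-style probability
    meets the threshold. The 0.5 cut-off is an illustrative choice."""
    return {(r, c)
            for r, row in enumerate(prob_map)
            for c, p in enumerate(row)
            if p >= threshold}

# A tiny 3x3 probability map with a hot spot in the lower right.
prob_map = [
    [0.1, 0.2, 0.1],
    [0.3, 0.9, 0.8],
    [0.2, 0.7, 0.4],
]
print(sorted(segment_bleeding(prob_map)))  # [(1, 1), (1, 2), (2, 1)]
```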
  • the computer may calculate the amount of bleeding based on the bleeding region in the surgical image (S400).
  • the computer may calculate the amount of bleeding using pixel information of the bleeding region in the surgical image. For example, the computer may calculate the amount of bleeding using the number of pixels corresponding to the bleeding region in the surgical image, color information (eg, RGB value) of the pixel, and the like.
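A hedged sketch of the pixel-count-and-color style of estimate described above; every constant (per-pixel area, assumed blood-film thickness) is an illustrative assumption, since the patent does not give calibration values:

```python
def bleeding_amount_mm3(region_rgbs, pixel_area_mm2=0.01, film_depth_mm=1.0):
    """Estimate a blood volume from the segmented pixels alone: each
    pixel contributes an assumed area times an assumed blood-film
    thickness, scaled by how red-saturated it is. Every constant here is
    an illustrative assumption, not a calibration from the patent."""
    total = 0.0
    for r, g, b in region_rgbs:
        redness = max(0.0, (r - (g + b) / 2) / 255)  # 0 = not red, 1 = pure red
        total += redness * pixel_area_mm2 * film_depth_mm
    return total

# 100 fully red pixels -> 100 * 1.0 * 0.01 mm^2 * 1.0 mm = 1.0 mm^3
print(round(bleeding_amount_mm3([(255, 0, 0)] * 100), 6))  # 1.0
```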
  • The computer acquires depth information of the bleeding region based on the depth map of the surgical image, and can calculate the amount of bleeding by estimating the volume corresponding to the bleeding region from the acquired depth information.
  • Since the surgical image is a stereoscopic image, it carries three-dimensional depth information, so the volume of the bleeding region in three-dimensional space can be determined.
  • The computer may acquire pixel information (e.g., the number of pixels and their positions) of the bleeding region in the surgical image, and compute the depth values of the depth map corresponding to the acquired pixel information of the bleeding region.
  • the computer can calculate the amount of bleeding by knowing the volume of the bleeding area based on the calculated depth value.
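The depth-map-based volume estimate above can be sketched as follows, under the assumption (not stated in the patent) that pooled blood raises the imaged surface above a known "dry tissue" baseline depth; all names and constants are illustrative:

```python
def bleeding_volume_mm3(region, depth_map, base_depth_mm, pixel_area_mm2=0.01):
    """Sum, over the segmented bleeding pixels, how far the imaged
    surface sits above an assumed 'dry tissue' baseline depth. Pooled
    blood raises the surface toward the camera, so its depth reading is
    smaller than the baseline. Baseline and pixel area are assumptions."""
    volume = 0.0
    for r, c in region:
        height_mm = max(0.0, base_depth_mm - depth_map[r][c])
        volume += height_mm * pixel_area_mm2
    return volume

depth_map = [
    [50.0, 50.0],
    [48.0, 47.0],  # bottom row sits 2-3 mm closer to the camera
]
region = {(1, 0), (1, 1)}
print(round(bleeding_volume_mm3(region, depth_map, base_depth_mm=50.0), 6))  # 0.05
```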
  • When gauze is included in the surgical image, the computer may calculate the amount of bleeding in the bleeding region by additionally using gauze information.
  • The computer may reflect the number of gauze pads in the surgical image and their color information (e.g., RGB values) in the amount of bleeding generated in the bleeding region.
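A hedged sketch of the gauze-based estimate: gauze count and color change are mapped to an absorbed volume. The linear redness-to-capacity rule and the 20 ml capacity are invented for illustration; the patent gives no formula:

```python
def gauze_blood_ml(gauze_rgbs, capacity_ml=20.0):
    """Map each gauze pad's average colour to an absorbed volume: a
    fully red-saturated pad counts as one full capacity, a white pad as
    zero. The linear rule and 20 ml capacity are invented for
    illustration; the patent does not specify a formula."""
    total = 0.0
    for r, g, b in gauze_rgbs:
        saturation = max(0.0, (r - (g + b) / 2) / 255)  # 0 = white, 1 = pure red
        total += saturation * capacity_ml
    return total

# One unused white pad and one fully soaked pad.
print(gauze_blood_ml([(255, 255, 255), (255, 0, 0)]))  # 20.0
```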
  • The computer may calculate the amount of bleeding in the bleeding region based on a three-dimensional surgical simulation model generated to match the patient's body. For example, the computer may obtain the region corresponding to the bleeding region in the 3D surgical simulation model, and calculate the amount of bleeding by estimating pixel information or volume information (depth information) for that region in the model.
  • the computer may segment the bleeding region in the surgical image, and calculate the amount of bleeding using the pixel information and the depth map of the segmented bleeding region.
  • Since the amount of bleeding is calculated only for the portion estimated (segmented) as the bleeding region, it can be calculated more efficiently and accurately; moreover, errors from pixels that are not bleeding but have bleeding-like values (that is, bleeding-like feature information) can be reduced. A detailed process will be described with reference to FIG. 3.
  • The computer may acquire the surgical image 10, recognize whether a bleeding region exists in the surgical image 10 based on deep-learning-based learning, and estimate the bleeding region.
  • The computer may segment only the portion corresponding to the bleeding region from the surgical image by applying the Grad-CAM technique as described above.
  • the computer may acquire a depth map 30 corresponding to the segmented bleeding area 20.
  • The computer calculates pixel information (e.g., the number of pixels and the RGB color values of each pixel) and volume information (depth information) of the bleeding region 20 based on the surgical image 10, the segmented bleeding region 20, and the depth map 30, and derives the amount of bleeding from them.
  • the bleeding amount can be calculated by concentrating on the segmented bleeding area, so that a more accurate bleeding amount can be obtained.
  • The computer may calculate the amount of bleeding not only from the bleeding surgical image in which the bleeding region exists, but also by comparing it with a non-bleeding surgical image in which the bleeding region does not exist.
  • The computer obtains, from the non-bleeding surgical image, the pixel information or volume information (depth information) of the portion corresponding to the bleeding region of the bleeding surgical image, and compares it with the pixel information or volume information (depth information) of the bleeding region in the bleeding surgical image. By comparing the two images, the computer can determine the exact pixel or volume information at the time of non-bleeding, and thereby calculate a more accurate amount of bleeding in the bleeding region.
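The bleeding-vs-non-bleeding image comparison can be sketched as a per-pixel color difference against the reference frame (the tolerance value and the flat-list pixel layout are assumptions for illustration):

```python
def red_shifted_pixels(frame, reference, tol=60):
    """Flag indices of pixels whose colour moved toward red relative to
    a non-bleeding reference frame of the same scene. This is a crude
    stand-in for the patent's two-image comparison; the flat pixel
    lists and the tolerance are illustrative assumptions."""
    flagged = set()
    for i, ((r, g, b), (r0, g0, b0)) in enumerate(zip(frame, reference)):
        red_shift = (r - r0) - ((g - g0) + (b - b0)) / 2
        if red_shift > tol:
            flagged.add(i)
    return flagged

reference = [(90, 90, 90), (90, 90, 90), (90, 90, 90)]
frame = [(90, 90, 90), (200, 40, 40), (95, 88, 90)]
print(red_shifted_pixels(frame, reference))  # {1}
```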
  • Depending on the type of surgery, the surgical site, and so on, the computer may calculate the amount of bleeding using pixel information of the bleeding region in the surgical image, using volume information (depth information), using gauze information, or using a three-dimensional surgical simulation model.
  • For example, in gastric cancer surgery a large amount of bleeding occurs, and the surgery proceeds while the bleeding site is continuously absorbed with gauze. In this case, since measuring volume in the bleeding region is difficult, the computer can calculate the amount of bleeding by reflecting the number of gauze pads used and their color change as they absorb blood. In rectal cancer surgery, by contrast, the dense distribution of blood vessels around the site causes bleeding to occur little by little.
  • Here, the medical staff perform irrigation (that is, flushing with water) multiple times to identify the bleeding site, so volume measurement in the bleeding region cannot yield an exact amount of bleeding.
  • In this case, the computer can calculate the amount of bleeding by reflecting the color change of the pixels in the bleeding region.
  • By estimating the bleeding region in the surgical image and calculating the amount of bleeding, a learning model specialized for bleeding can be provided.
  • Since the bleeding region and the amount of bleeding can be determined from the surgical image, the total bleeding that occurred in a specific surgery can be estimated, and furthermore a criterion for evaluating the surgery by its degree of bleeding can be provided.
  • FIG. 4 is a diagram schematically showing the configuration of an apparatus 100 for performing a bleeding evaluation method using a surgical image according to an embodiment of the present invention.
  • The processor 110 may include one or more cores (not shown), a graphics processor (not shown), and/or connection passages (e.g., a bus) for transmitting and receiving signals with other components.
  • the processor 110 executes one or more instructions stored in the memory 120 to perform a bleeding evaluation method using the surgical image described with reference to FIGS. 1 to 3.
  • The processor 110, by executing one or more instructions stored in the memory 120, acquires a surgical image, recognizes whether a bleeding region exists in the surgical image based on deep-learning-based learning, and estimates the location of the bleeding region from the surgical image based on the recognition result.
  • The processor 110 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed by the processor 110.
  • the processor 110 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
  • the memory 120 may store programs (one or more instructions) for processing and controlling the processor 110. Programs stored in the memory 120 may be divided into a plurality of modules according to functions.
  • The bleeding evaluation method using a surgical image according to the embodiments of the present invention described above may be implemented as a program (or application) executed in combination with a computer (hardware) and stored in a medium.
  • The above-described program may include code, written in a computer language such as C, C++, Java, or machine language readable by the computer's processor (CPU) through the computer's device interface, so that the computer reads the program and executes the methods implemented by it. Such code may include functional code defining the functions necessary to execute the methods, and control code related to the execution procedures necessary for the computer's processor to execute those functions in a predetermined order.
  • The code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media required for the processor to execute the functions should be referenced.
  • When the computer's processor needs to communicate with a remote computer or server to execute the functions, the code may further include communication-related code specifying how to communicate using the computer's communication module and what information or media to transmit and receive during communication.
  • The storage medium is not a medium that stores data for a short time, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device.
  • Examples of the storage medium include, but are not limited to, a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. That is, the program may be stored in various recording media on various servers accessible by the computer, or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that the computer-readable code is stored in a distributed fashion.
  • The program may reside in RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable ROM), EEPROM (electrically erasable programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

A method is disclosed by which a computer evaluates bleeding using a surgical image. The method comprises the steps of: acquiring a surgical image; recognizing whether a bleeding region exists in the surgical image on the basis of deep-learning-based learning; and estimating the location of the bleeding region from the surgical image on the basis of the recognition result.
PCT/KR2019/002095 2018-02-20 2019-02-20 Method and device for evaluating bleeding by using a surgical image WO2019164277A1 (fr)
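The pipeline the abstract describes (acquire a frame, recognize whether a bleeding region exists, estimate its location) can be sketched as follows. Note that the patent's recognizer is a trained deep-learning model; the `bleeding_probability` stand-in below is only a red-dominance heuristic with the same per-pixel interface, and all function names and thresholds are illustrative assumptions, not the patented method.

```python
import numpy as np

def bleeding_probability(frame):
    """Stand-in for the deep-learning recognizer: per-pixel bleeding score in [0, 1].

    The patent uses a model trained on surgical images; this red-dominance
    heuristic is a placeholder that merely exposes the same interface.
    """
    f = frame.astype(np.float32) / 255.0
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    return np.clip(r - np.maximum(g, b), 0.0, 1.0)

def estimate_bleeding(frame, threshold=0.3):
    """Recognize whether a bleeding region exists and estimate its location."""
    mask = bleeding_probability(frame) > threshold
    if not mask.any():
        return {"bleeding": False, "location": None}
    ys, xs = np.nonzero(mask)
    return {
        "bleeding": True,
        "location": (float(ys.mean()), float(xs.mean())),  # centroid (row, col)
        "area_px": int(mask.sum()),  # crude proxy for region extent
    }

# Synthetic 64x64 "surgical frame": pale background with a red patch.
frame = np.full((64, 64, 3), 180, dtype=np.uint8)
frame[16:24, 36:44] = (200, 30, 30)
result = estimate_bleeding(frame)
```

On this synthetic frame the region is detected and its centroid falls at the center of the red patch; in the patented setting the probability map would instead come from the trained network, with the thresholding and localization steps unchanged.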

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
KR10-2018-0019868 2018-02-20
KR20180019868 2018-02-20
KR10-2018-0019867 2018-02-20
KR20180019867 2018-02-20
KR10-2018-0019866 2018-02-20
KR20180019866 2018-02-20
KR10-2018-0129709 2018-10-29
KR1020180129709A KR102014364B1 (ko) 2018-02-20 2018-10-29 Method and apparatus for evaluating bleeding using a surgical image

Publications (1)

Publication Number Publication Date
WO2019164277A1 true WO2019164277A1 (fr) 2019-08-29

Family

ID=67687818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/002095 WO2019164277A1 (fr) 2018-02-20 2019-02-20 Procédé et dispositif d'évaluation de saignement par utilisation d'une image chirurgicale

Country Status (1)

Country Link
WO (1) WO2019164277A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080020652A (ko) * 2005-06-01 2008-03-05 올림푸스 메디칼 시스템즈 가부시키가이샤 내시경 진단 지원 방법, 내시경 진단 지원 장치 및 내시경 진단 지원 프로그램을 기록한 기록매체
JP4504951B2 (ja) * 2001-03-14 2010-07-14 ギブン イメージング リミテッド 生体内での比色分析の異常を検出するための方法およびシステム
JP2011036371A (ja) * 2009-08-10 2011-02-24 Tohoku Otas Kk 医療画像記録装置
KR101175065B1 (ko) * 2011-11-04 2012-10-12 주식회사 아폴로엠 수술용 영상 처리 장치를 이용한 출혈 부위 검색 방법
JP2016039874A (ja) * 2014-08-13 2016-03-24 富士フイルム株式会社 内視鏡画像診断支援装置、システム、方法およびプログラム

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113761776A (zh) * 2021-08-24 2021-12-07 First Medical Center of Chinese PLA General Hospital Augmented-reality-based simulation system and method for a cardiac bleeding and hemostasis model
CN115761365A (zh) * 2022-11-28 2023-03-07 Beijing Friendship Hospital, Capital Medical University Method, apparatus, and electronic device for determining intraoperative bleeding status
CN115761365B (zh) * 2022-11-28 2023-12-01 Beijing Friendship Hospital, Capital Medical University Method, apparatus, and electronic device for determining intraoperative bleeding status

Similar Documents

Publication Publication Date Title
KR102014364B1 (ko) Method and apparatus for evaluating bleeding using a surgical image
JP7058373B2 (ja) Method, apparatus, device, and storage medium for detecting and localizing lesions in medical images
WO2020207377A1 (fr) Method, device, and system for training an image recognition model and for image recognition
WO2022050473A1 (fr) Apparatus and method for estimating camera pose
US20220051405A1 (en) Image processing method and apparatus, server, medical image processing device and storage medium
CN110046551A (zh) Method and device for generating a face recognition model
WO2019172498A1 (fr) Computer-aided diagnosis system for indicating the malignancy of a tumor, and basis for deducing the malignancy and method therefor
WO2022252908A1 (fr) Object recognition method and apparatus, computer device, and storage medium
WO2022089257A1 (fr) Medical image processing method, apparatus, device, computer storage medium, and product
WO2021071288A1 (fr) Method and device for training a fracture diagnosis model
WO2021093011A1 (fr) Unmanned vehicle driving decision-making method, unmanned vehicle driving decision-making device, and unmanned vehicle
WO2019164277A1 (fr) Method and device for evaluating bleeding by using a surgical image
WO2021137454A1 (fr) Artificial-intelligence-based method and system for analyzing user medical information
CN110472737A (zh) Neural network model training method and apparatus, and medical image processing system
WO2019143021A1 (fr) Method for supporting image visualization and apparatus using same
WO2020143165A1 (fr) Method and system for recognizing a reproduced image, and terminal device
WO2019143179A1 (fr) Method for automatically detecting the same regions of interest between images of the same object taken at a time interval, and apparatus using same
CN111126268A (zh) Keypoint detection model training method and apparatus, electronic device, and storage medium
WO2023136695A1 (fr) Apparatus and method for generating a virtual lung model of a patient
WO2023113285A1 (fr) Method for managing body images and apparatus using same
WO2019164273A1 (fr) Method and device for predicting surgery time on the basis of a surgical image
WO2023029348A1 (fr) Artificial-intelligence-based image instance labeling method and related device
WO2020159276A1 (fr) Surgical analysis apparatus, and system, method, and program for analyzing and recognizing a surgical image
WO2021093744A1 (fr) Method and apparatus for measuring pupil diameter, and computer-readable storage medium
WO2020230972A1 (fr) Method for improving the reproduction performance of a trained deep neural network model and device using same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19756745

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19756745

Country of ref document: EP

Kind code of ref document: A1