WO2021206518A1 - Method and system for analyzing a surgical procedure after an operation - Google Patents

Method and system for analyzing a surgical procedure after an operation

Info

Publication number
WO2021206518A1
Authority
WO
WIPO (PCT)
Prior art keywords
surgical
bleeding
image
recognized
bleeding event
Prior art date
Application number
PCT/KR2021/004533
Other languages
English (en)
Korean (ko)
Inventor
임자연
김하진
이송아
박성현
Original Assignee
(주)휴톰
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)휴톰 filed Critical (주)휴톰
Publication of WO2021206518A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/25 User interfaces for surgical systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/361 Image-producing devices, e.g. surgical cameras
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 15/00 Special procedures for taking photographs; Apparatus therefor
    • G03B 15/14 Special procedures for taking photographs; Apparatus therefor for taking photographs during medical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107 Visualisation of planned trajectories or target regions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/25 User interfaces for surgical systems
    • A61B 2034/254 User interfaces for surgical systems being adapted depending on the stage of the surgical procedure
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/25 User interfaces for surgical systems
    • A61B 2034/256 User interfaces for surgical systems having a database of accessory information, e.g. including context sensitive help or scientific articles

Definitions

  • The present invention relates to a method and system for analyzing a surgical procedure after an operation. More specifically, it relates to a method and system that recognize each phase of a post-operative surgical image through a learning model, count the number of transitions between the recognized surgical phases, the time required for each phase, and the number of bleeding occurrences, and provide the results to the user.
  • When doctors plan a patient's surgery, they refer to two-dimensional medical images such as computed tomography (CT) or magnetic resonance imaging (MRI) images of the patient.
  • With such two-dimensional organ imaging information, there are many limitations in determining the location of a lesion and the distribution of the surrounding blood vessels.
  • Deep learning is defined as a set of machine learning algorithms that attempt high-level abstraction (summarizing key content or functions from large amounts of data or complex data) through a combination of several nonlinear transformation methods. Broadly, deep learning can be viewed as a field of machine learning that teaches computers how to think.
  • In the entire captured surgical image, each individual surgical stage is recognized using a surgical stage learning model; the time domain corresponding to each recognized surgical stage is then separated, the number of transitions and the time required are calculated for each surgical stage, and whether a bleeding event has occurred is determined for each surgical stage.
  • The method may include providing a user interface for analyzing the surgical result of each surgical stage based on the number of transitions, the required time, and the bleeding events.
  • The step of calculating the number of transitions may include counting the number of times a recognized surgical stage is interrupted rather than continuing, as sketched below.
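As a minimal illustration of how such per-stage transition counts and required times could be computed from per-frame stage labels; the function, field names, and frame rate here are assumptions for illustration, not the patent's implementation:

```python
from itertools import groupby

def analyze_stage_sequence(frame_stages, fps=30):
    """Compute per-stage occurrence counts, durations, and interruption
    (transition) counts from a sequence of per-frame stage labels,
    e.g., the output of a stage recognition model."""
    # Collapse consecutive identical labels into (stage, run_length) segments.
    segments = [(stage, sum(1 for _ in run)) for stage, run in groupby(frame_stages)]

    stats = {}  # stage -> {"occurrences": int, "seconds": float}
    for stage, length in segments:
        entry = stats.setdefault(stage, {"occurrences": 0, "seconds": 0.0})
        entry["occurrences"] += 1
        entry["seconds"] += length / fps

    # A stage that appears in k separate segments was interrupted k - 1 times.
    switches = {s: v["occurrences"] - 1 for s, v in stats.items()}
    return stats, switches

frames = [5, 5, 5, 6, 6, 5, 5, 7, 7, 7]    # toy per-frame stage labels
stats, switches = analyze_stage_sequence(frames, fps=1)
print(stats[5])      # {'occurrences': 2, 'seconds': 5.0}
print(switches[5])   # stage 5 was interrupted once
```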
  • The step of determining whether a bleeding event has occurred may include determining, frame by frame and based on deep learning, whether a bleeding event occurs in the entire captured surgical image, and linking each bleeding event occurrence to the surgical stage recognition result using the separated time domains.
  • The step of determining whether a bleeding event has occurred may also include determining, based on the calculated amount of bleeding, whether the bleeding interferes with the operation: the bleeding is recognized as a first bleeding event when it does not interfere with the operation, and as a second bleeding event when it does.
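A hedged sketch of this first/second bleeding event rule; the threshold value and names are invented placeholders, not values from the patent:

```python
def classify_bleeding_event(bleeding_amount_ml, interference_threshold_ml=50.0):
    """Classify a detected bleeding as a first or second bleeding event
    depending on whether the estimated amount interferes with the
    operation. The 50 ml threshold is an assumed placeholder."""
    if bleeding_amount_ml < interference_threshold_ml:
        return "first_bleeding_event"   # bleeding does not interfere with the operation
    return "second_bleeding_event"      # bleeding interferes with the operation
```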
  • The computer program may perform: a process of recognizing each individual surgical stage in the entire captured surgical image using the surgical stage learning model and then separating a time domain corresponding to each recognized surgical stage; a process of calculating the number of transitions and the required time for each surgical stage; a process of determining whether a bleeding event has occurred for each surgical stage; and a process of providing a user interface for analyzing the surgical result of each surgical stage based on the number of transitions, the required time, and the bleeding events.
  • a medical imaging device for capturing a surgical image
  • a display unit for providing the surgical process analysis result to a user
  • The system may include a controller with one or more processors and one or more memories storing instructions for performing operations. The operations performed by the controller may include: recognizing each surgical stage in the entire captured surgical image using the surgical stage learning model and separating the time domain corresponding to each recognized stage; calculating the number of transitions and the required time for each surgical stage; determining whether a bleeding event has occurred for each surgical stage; and providing a user interface for analyzing the surgical result of each surgical stage based on the number of transitions, the required time, and the bleeding events.
  • The operation of calculating the number of transitions may include counting the number of times a recognized surgical stage is interrupted rather than continuing.
  • "Virtual surgical data" refers to data including rehearsal or simulation actions performed on a virtual body model. It may be image data of a rehearsal or simulation performed on a virtual body model in a virtual space, or data recording the surgical operations performed on the virtual body model. "Virtual surgical data" may also include training data for training the surgical learning model.
  • "Actual surgical data" refers to data obtained when actual medical staff perform surgery. It may be image data of the surgical site captured during an actual surgical procedure, or data recording the surgical operations performed during an actual surgical procedure.
  • A surgical phase refers to a basic phase performed sequentially within the entire procedure of a specific type of operation.
  • The term "computer" includes any of various devices capable of performing arithmetic processing and providing a result to a user.
  • Computers include not only desktop PCs and notebooks but also smartphones, tablet PCs, cellular phones, PCS (Personal Communication Service) phones, synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminals, palm PCs, personal digital assistants (PDAs), and the like.
  • A head-mounted display (HMD) device that includes a computing function may also be a computer.
  • The computer may correspond to a server that receives a request from a client and performs information processing.
  • FIG. 1 is a view showing a robotic surgery system according to an embodiment of the present invention.
  • Referring to FIG. 1, a schematic diagram of a system capable of performing robotic surgery according to an embodiment is shown.
  • The robotic surgery system includes a medical image capturing device 10, a server 20, and, provided in the operating room, a controller 30, an image capturing unit 36, a display 32, and a surgical robot 34.
  • The medical imaging device 10 may be omitted from the robotic surgery system according to an embodiment.
  • Robotic surgery is performed by the user controlling the surgical robot 34 through the controller 30. In one embodiment, robotic surgery may also be performed automatically by the controller 30 without the user's control.
  • The server 20 is a computing device including at least one processor, a memory, and a communication unit.
  • The controller 30 is a computing device including at least one processor, a memory, and a communication unit.
  • The controller 30 includes hardware and software interfaces for controlling the surgical robot 34.
  • The image capturing unit 36 includes at least one image sensor. That is, the image capturing unit 36 includes at least one camera device and is used to photograph the surgical site. In one embodiment, the image capturing unit 36 is used in combination with the surgical robot 34. For example, the image capturing unit 36 may include at least one camera coupled to a surgical arm of the surgical robot 34.
  • The image captured by the image capturing unit 36 is displayed on the display 32.
  • The server 20 generates the information necessary for robotic surgery using medical image data of the object (patient) captured in advance by the medical imaging device 10, and provides the generated information to the controller 30.
  • Each step below is described as being performed by a "computer" for convenience of explanation, but the subject performing each step is not limited to a specific device; all or part of each step may be performed by the server 20 or the controller 30.
  • The surgical image captured by the medical imaging device 10 may be divided according to various criteria. For example, the surgical image may be divided based on the types of objects included in the image. The division method based on object type requires the computer to recognize each object.
  • The objects recognized in the surgical image largely include parts of the human body, objects introduced from the outside, and objects generated internally.
  • The human body includes body parts that are imaged by medical imaging (e.g., CT) prior to surgery and body parts that are not.
  • Body parts captured by medical imaging include organs, blood vessels, bones, tendons, and the like, and these body parts may be recognized based on a 3D modeling image generated from the medical image.
  • The position, size, shape, and so on of each body part are recognized in advance by a 3D analysis method based on the medical image.
  • The computer defines an algorithm that can determine in real time the position of the body part corresponding to the surgical image, and based on this, it can obtain information on the position, size, shape, and so on of each body part included in the surgical image without performing separate image recognition.
  • The objects introduced from the outside include, for example, surgical tools, gauze, and clips. Since these have preset morphological characteristics, the computer can recognize them in real time through image analysis during surgery.
  • The objects generated internally include, for example, bleeding occurring at a body part. These can be recognized in real time by the computer through image analysis during surgery.
  • The movement of organs or omentum included in the body part, as well as the generation of internal objects, are generally caused by the movement of objects introduced from the outside.
  • The surgical image may be divided into several surgical phases based on the movement of each object.
  • The surgical image may be divided based on the movements of externally introduced objects, that is, actions.
  • The computer can recognize the type of each action and, further, the cause of each action.
  • The computer can segment the surgical image based on the recognized actions and, through stepwise segmentation, can recognize everything from each detailed surgical operation up to the type of the entire operation. For example, the computer may extract feature information from the captured surgical image based on a surgical stage learning model trained by convolutional neural network (CNN) machine learning, and based on that feature information, divide the image by surgical stage or recognize which stage of the surgery is currently in progress.
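For illustration only, a minimal sketch of the kind of CNN-based per-frame stage classifier this passage describes; the architecture, class count, and input size are assumptions, not the patent's model:

```python
import torch
import torch.nn as nn

class StageRecognitionCNN(nn.Module):
    """Toy convolutional classifier mapping one video frame to one of
    N surgical stages (N=8 is an assumed placeholder)."""
    def __init__(self, num_stages: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> feature vector
        )
        self.classifier = nn.Linear(64, num_stages)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)  # (batch, 64) feature information
        return self.classifier(feats)        # per-stage logits

model = StageRecognitionCNN()
frame = torch.randn(1, 3, 224, 224)          # one RGB frame
stage = model(frame).argmax(dim=1)           # predicted surgical stage index
```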
  • From the recognized actions, the computer may determine a predefined type of surgery corresponding to the surgical image.
  • By determining the type of surgery, information on the entire surgical process may be acquired.
  • One surgical process may be selected according to a doctor's selection, or based on the actions recognized up to a specific point in time.
  • The computer may recognize and predict surgical stages based on the acquired surgical process. For example, when a specific stage is recognized within a series of surgical processes, the subsequent stages may be predicted, or candidates for possible next stages may be selected, as in the sketch below. This makes it possible to greatly reduce the error rate of surgical image recognition caused by, for example, the omentum obscuring the view.
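One plausible way to use a known surgical process to prune stage predictions, as hinted above; the transition table and function are invented examples, and a real process graph would come from the selected procedure:

```python
# Hypothetical next-stage candidates for a known surgical process.
ALLOWED_NEXT = {1: {2}, 2: {3}, 3: {4, 5}, 4: {5}, 5: {6}, 6: set()}

def constrain_prediction(prev_stage, logits_by_stage):
    """Restrict the model's stage prediction to stages reachable from
    the previously recognized stage (or staying in the same stage)."""
    candidates = ALLOWED_NEXT.get(prev_stage, set()) | {prev_stage}
    return max(candidates, key=lambda s: logits_by_stage.get(s, float("-inf")))

print(constrain_prediction(3, {1: 0.9, 4: 0.5, 5: 0.7}))  # -> 5, stage 1 is pruned
```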
  • The computer may also recognize that a surgical error situation has occurred. For example, if stage transitions deviating from the predetermined surgical process occur frequently, it may be recognized that a surgical error situation has occurred.
  • The computer may extract navigation information about the main vessels corresponding to each surgical stage and the vessels branching from those main vessels along the blood flow, thereby assisting the user in performing effective surgery.
  • The computer may determine whether bleeding has occurred due to a surgical error based on the image recognition of the surgical stage.
  • The computer can determine the location, time, and magnitude of each bleeding, and can also determine whether the surgery should be stopped due to bleeding. Accordingly, in an embodiment, the computer may provide data on error situations and bleeding situations as a surgical result report, which can be used to exclude unnecessary operations or mistakes from the surgical process and to streamline it.
  • The computation performed on the computer includes: an operation of recognizing the individual surgical stages in the captured overall surgical image using a surgical stage learning model; an operation of calculating the number of transitions and the required time for each recognized surgical stage; an operation of determining whether a bleeding event has occurred for each recognized surgical stage; and an operation of providing a user interface for analyzing the surgical results of each surgical stage based on the number of transitions, the required time, and the bleeding events.
  • The operation of calculating the number of transitions may include counting the number of times a recognized surgical stage is interrupted rather than continuing.
  • FIG. 2 is a flowchart for explaining a surgical procedure analysis method according to an embodiment of the present invention.
  • The trained surgical stage learning model can be used to recognize the surgical stages more accurately in the surgical image to be analyzed after surgery.
  • For example, in step 5 (e.g., a step in which a partial omentectomy should be performed in gastric cancer surgery), at least six transitions may occur before the procedure moves on to another step such as step 6.
  • The surgical image dataset may be trained using machine learning methods such as supervised learning, unsupervised learning, and reinforcement learning. The computer may therefore acquire a surgical image dataset as training data and use it to train, in advance, a learning model (e.g., a bleeding presence recognition model) that recognizes whether a bleeding region exists in a surgical image.
  • When the computer acquires a new surgical image (that is, the surgical image of step S600), it can recognize whether a bleeding region exists in the new surgical image using the learning model (e.g., the bleeding recognition model) trained on the surgical image dataset.
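Purely as an illustration of applying such a trained bleeding-presence model to a new frame; the single-logit model interface and decision threshold are hypothetical assumptions:

```python
import torch

def detect_bleeding(model: torch.nn.Module, frame: torch.Tensor,
                    threshold: float = 0.5) -> bool:
    """Return True if the (assumed) binary bleeding-presence model scores
    this frame above the decision threshold. `model` is assumed to output
    a single logit per image; `frame` is a (3, H, W) tensor."""
    model.eval()
    with torch.no_grad():
        prob = torch.sigmoid(model(frame.unsqueeze(0))).item()
    return prob >= threshold
```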
  • When the computer recognizes, based on deep learning, that a region of the surgical image is a bleeding region, it can specify the bleeding region in the surgical image and estimate the location of the specified region.
  • The computer may then calculate the amount of bleeding in the bleeding region based on the location estimated in step S620 (S630).
  • The computer may recognize a case in which pooled blood is detected or blood-stained tissue is detected as a second bleeding event.
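A rough sketch of estimating a bleeding amount from a recognized bleeding region, assuming the model yields a binary mask; the pixel-to-volume factor is an invented placeholder that would in practice depend on camera geometry and scale:

```python
import numpy as np

def estimate_bleeding_amount(bleeding_mask: np.ndarray,
                             ml_per_pixel: float = 0.0005) -> float:
    """Approximate the bleeding amount from a binary bleeding-region mask.
    ml_per_pixel is a hypothetical calibration constant."""
    pixel_area = int(bleeding_mask.sum())   # number of bleeding pixels
    return pixel_area * ml_per_pixel

mask = np.zeros((480, 640), dtype=np.uint8)
mask[200:240, 300:360] = 1                   # toy detected bleeding region
print(estimate_bleeding_amount(mask))        # ~1.2 ml with the assumed factor
```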
  • The user can receive data on the error situations and bleeding situations of each surgical stage as a surgical result report, exclude unnecessary motions or mistakes from the surgical process, and use the report to streamline the surgical process.
  • FIG. 8 is a view showing an example of a surgical procedure analysis result according to an embodiment of the present invention.
  • The user interface 800 may include a video playback area 801, an area 802 for displaying a summary of the entire surgical procedure together with analysis data for the portion currently being played, an area 803 for providing additional video-related functions, and an area 804 for providing further functions related to the video.
  • The video for the selected surgical stage may be played back at 0.5x to 7x speed.
  • Bleeding points may be displayed along the entire operation timeline, and the video of the corresponding portion may be played at the user's selection.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Quality & Reliability (AREA)

Abstract

A method and system for analyzing a surgical procedure after an operation are disclosed. The method for analyzing a surgical procedure after an operation, performed by a system according to an embodiment of the invention, may comprise the steps of: recognizing, in an entire captured surgical image, each individual surgical phase using a surgical phase learning model, and then separating a time domain corresponding to each recognized surgical phase; calculating the number of transitions and the time spent for each surgical phase; determining whether a bleeding event has occurred for each surgical phase; and providing a user interface capable of analyzing a surgical result for each surgical phase based on the number of transitions, the time spent, and the bleeding event.
PCT/KR2021/004533 2020-04-10 2021-04-09 Method and system for analyzing a surgical procedure after an operation WO2021206518A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200043791A KR102321157B1 (ko) 2020-04-10 2020-04-10 Method and system for analyzing a surgical procedure after an operation
KR10-2020-0043791 2020-04-10

Publications (1)

Publication Number Publication Date
WO2021206518A1 true WO2021206518A1 (fr) 2021-10-14

Family

ID=78022925

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/004533 WO2021206518A1 (fr) 2020-04-10 2021-04-09 Method and system for analyzing a surgical procedure after an operation

Country Status (2)

Country Link
KR (4) KR102321157B1 (fr)
WO (1) WO2021206518A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023102880A1 (fr) * 2021-12-10 2023-06-15 曾稼志 Method and system for processing tracheal intubation images and method for evaluating tracheal intubation effectiveness
WO2023230114A1 (fr) * 2022-05-26 2023-11-30 Verily Life Sciences Llc Machine vision-based two-stage surgical phase recognition module

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101175065B1 * 2011-11-04 2012-10-12 주식회사 아폴로엠 Method for searching for a bleeding site using a surgical image processing device
KR20120126679A * 2011-05-12 2012-11-21 주식회사 이턴 Control method of a surgical robot system for determining and responding to surgical situations, recording medium recording the same, and surgical robot system
KR101862360B1 * 2017-12-28 2018-06-29 (주)휴톰 Method and program for providing feedback on surgical results
JP2018529134A * 2015-06-02 2018-10-04 寛 陳 Medical data analysis method based on deep learning and intelligent analyzer therefor
KR20190080705A * 2018-05-23 2019-07-08 (주)휴톰 Method and program for providing feedback on surgical results
KR20190100011A * 2018-02-20 2019-08-28 (주)휴톰 Method and apparatus for providing surgical information using a surgical image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160259888A1 (en) * 2015-03-02 2016-09-08 Sony Corporation Method and system for content management of video images of anatomical regions
US9836654B1 (en) * 2017-02-28 2017-12-05 Kinosis Ltd. Surgical tracking and procedural map analysis tool
KR101926123B1 Method and apparatus for segmenting a surgical image
US11969216B2 (en) 2017-12-28 2024-04-30 Cilag Gmbh International Surgical network recommendations from real time analysis of procedure variables against a baseline highlighting differences from the optimal solution
WO2019181432A1 (fr) * 2018-03-20 2019-09-26 ソニー株式会社 Surgical assistance system, information processing device, and program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120126679A * 2011-05-12 2012-11-21 주식회사 이턴 Control method of a surgical robot system for determining and responding to surgical situations, recording medium recording the same, and surgical robot system
KR101175065B1 * 2011-11-04 2012-10-12 주식회사 아폴로엠 Method for searching for a bleeding site using a surgical image processing device
JP2018529134A * 2015-06-02 2018-10-04 寛 陳 Medical data analysis method based on deep learning and intelligent analyzer therefor
KR101862360B1 * 2017-12-28 2018-06-29 (주)휴톰 Method and program for providing feedback on surgical results
KR20190100011A * 2018-02-20 2019-08-28 (주)휴톰 Method and apparatus for providing surgical information using a surgical image
KR20190080705A * 2018-05-23 2019-07-08 (주)휴톰 Method and program for providing feedback on surgical results

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023102880A1 (fr) * 2021-12-10 2023-06-15 曾稼志 Method and system for processing tracheal intubation images and method for evaluating tracheal intubation effectiveness
WO2023230114A1 (fr) * 2022-05-26 2023-11-30 Verily Life Sciences Llc Machine vision-based two-stage surgical phase recognition module

Also Published As

Publication number Publication date
KR20210133198A (ko) 2021-11-05
KR102321157B1 (ko) 2021-11-04
KR102376161B9 (ko) 2024-01-11
KR102376161B1 (ko) 2022-03-18
KR20240011228A (ko) 2024-01-25
KR20220055457A (ko) 2022-05-03
KR102628324B1 (ko) 2024-01-23
KR20210126806A (ko) 2021-10-21

Similar Documents

Publication Publication Date Title
KR102014385B1 Method and apparatus for learning surgical images and recognizing surgical actions based on learning
US11908188B2 Image analysis method, microscope video stream processing method, and related apparatus
WO2019132168A1 (fr) System for learning surgical image data
WO2021206518A1 (fr) Method and system for analyzing a surgical procedure after an operation
KR101926123B1 Method and apparatus for segmenting a surgical image
WO2016126056A1 (fr) Apparatus and method for providing medical information
WO2019132165A1 (fr) Method and program for providing feedback on a surgical result
JP2010075403A Information processing apparatus, control method therefor, and data processing system
WO2016125978A1 (fr) Method and apparatus for displaying a medical image
WO2021206517A1 (fr) Method and system for intraoperative vascular navigation
WO2019132244A1 (fr) Method and program for generating surgical simulation information
KR102146672B1 Method and program for providing feedback on surgical results
WO2019164277A1 (fr) Method and device for evaluating bleeding using a surgical image
WO2020159276A1 (fr) Surgical analysis apparatus, and system, method, and program for analyzing and recognizing a surgical image
WO2018147674A1 (fr) Apparatus and method for diagnosing a medical condition based on a medical image
WO2022108387A1 (fr) Method and device for generating clinical record data
CN112885435B Method, apparatus, and system for determining a target region of an image
WO2019164278A1 (fr) Method and device for obtaining surgical information using a surgical image
WO2023234626A1 (fr) Apparatus and method for generating a 3D model of an organ and blood vessel according to surgery type
KR20190133424A Method and program for providing feedback on surgical results
WO2023287077A1 (fr) Artificial intelligence surgical system and control method therefor
WO2023018259A1 (fr) Diagnostic method and apparatus for remotely diagnosing a skin disease using augmented reality and virtual reality
WO2021118068A1 (fr) Method for generating medical images and device using same
WO2021251776A1 (fr) Method for tracking a tumor in a CT image and diagnostic system using same
CN116364265B Medical endoscope image optimization system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21785127

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21785127

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.03.23)
