WO2019164278A1 - Method and device for providing surgical information using a surgical image - Google Patents

Method and device for providing surgical information using a surgical image

Info

Publication number
WO2019164278A1
Authority
WO
WIPO (PCT)
Prior art keywords
surgical
surgery
organ
image
information
Prior art date
Application number
PCT/KR2019/002096
Other languages
English (en)
Korean (ko)
Inventor
이종혁
형우진
양훈모
김호승
어수행
Original Assignee
(주)휴톰
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020180145177A (published as KR20190100011A)
Application filed by (주)휴톰
Publication of WO2019164278A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30: Surgical robots
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 3/00: Geometric image transformations in the plane of the image

Definitions

  • The present invention relates to a method and apparatus for providing surgical information using a surgical image.
  • Open surgery refers to surgery in which the medical staff directly see and touch the part to be treated.
  • Minimally invasive surgery, also known as keyhole surgery, is typified by laparoscopic surgery and robotic surgery.
  • In laparoscopic surgery, small holes are made at the necessary sites without opening the abdomen, and a laparoscope fitted with a special camera and the surgical tools are inserted into the body; the procedure is observed through a video monitor, and microsurgery is performed using a laser or special instruments.
  • Robotic surgery performs minimally invasive surgery using a surgical robot.
  • Radiosurgery refers to surgical treatment performed with radiation or laser light from outside the body.
  • In such surgery, the surgical image is acquired during the actual operation and the surgery proceeds on the basis of it. It is therefore important to present, through the surgical image acquired during the actual operation, the surgical site and the various information related to it.
  • One object of the present invention is to provide a method and apparatus for providing surgical information using a surgical image.
  • Another object of the present invention is to provide a method and apparatus for estimating the organs observable in a surgical image by identifying the current surgical step from the surgical image, and for providing information on the estimated organs.
  • Another object of the present invention is to provide a method and apparatus for providing, through the surgical image, a simulation environment identical to the actual progress of the operation.
  • Another object of the present invention is to provide a method and apparatus for providing more meaningful information about the organs in a surgical image by analyzing the surgical image using deep learning.
  • According to one embodiment, a method of providing surgical information using a surgical image, performed by a computer, includes: obtaining a surgical image of a specific surgery; recognizing the surgical step within the specific surgery that corresponds to the surgical image; estimating an organ candidate group including at least one organ that can be extracted at that surgical step; and specifying an organ region in the surgical image based on the positional relationship between the organs in the organ candidate group.
  • Recognizing the surgical step may include: performing learning based on at least one surgical image from a previous time point and the surgical image at the current time point; deriving context information for the surgical image through the learning; and recognizing, based on the context information for the surgical image, the surgical step corresponding to the surgical image from among the surgical steps defined for the specific surgery.
  • When the specific surgery is organized as a hierarchical structure from a lowest level to a highest level, recognizing the surgical step may recognize a surgical operation belonging to a specific level of the hierarchy as the surgical step corresponding to the surgical image.
  • Specifying the organ region in the surgical image may include: learning the positional relationship between the organs in the organ candidate group; calculating position information of the organs included in the surgical image; and specifying, based on the position information of each organ, the organ region in the surgical image where that organ exists.
  • Specifying the organ region in the surgical image may further include: learning texture information for the organs in the organ candidate group; calculating the texture information of the organs included in the surgical image; and detecting, based on the calculated texture information and the position information of each organ, the organ region in the surgical image corresponding to that organ's texture information.
  • The method may further include displaying the specified organ region on the surgical image and providing it to the user.
  • The method may further include matching the specified organ region onto a virtual body model, where the virtual body model may be 3D modeling data generated based on medical image data obtained by imaging the inside of the body of the subject of the specific surgery.
  • The method may further include obtaining simulation data from a simulation on the virtual body model and generating cue sheet data for the specific surgery based on the obtained simulation data.
  • An apparatus according to one embodiment includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory, and by executing the one or more instructions the processor performs the above-described operations for a specific surgery.
  • A computer program according to an embodiment of the present invention is stored in a computer-readable recording medium and, combined with a computer, which is hardware, performs the method of providing surgical information using a surgical image.
  • According to the present invention, the surgical step of the surgery currently in progress can be accurately determined from the surgical image. In addition, identifying the current surgical step makes it possible to estimate the organs observable in the surgical image, thereby improving the organ recognition rate.
  • According to the present invention, the position and shape of the actual organs can be accurately determined from the surgical image.
  • The position and shape of these organs can be used to provide more meaningful information to the medical staff performing the surgery.
  • According to the present invention, by accurately specifying the position and shape of the actual organs from the surgical image, a simulation environment identical to the operation actually in progress can be implemented.
  • Medical staff can thus perform rehearsal or virtual surgery more accurately and effectively, improving their learning effect.
  • FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
  • FIGS. 2 and 3 are flowcharts illustrating a method for providing surgical information using a surgical image according to an embodiment of the present invention.
  • FIG. 4 is an example showing a process of recognizing a surgical step based on a surgical image according to an embodiment of the present invention.
  • FIG. 5 is an example illustrating a process of specifying an organ region in a surgical image based on information about an organ in an organ candidate group according to an embodiment of the present invention.
  • FIG. 6 is a view schematically showing the configuration of an apparatus 200 for performing a method for providing surgical information using a surgical image according to an embodiment of the present invention.
  • A “part” or “module” refers to a hardware component such as an FPGA or ASIC, or to software, and the “part” or “module” performs certain roles. However, “part” or “module” is not limited to software or hardware.
  • A “part” or “module” may be configured to reside in an addressable storage medium, or may be configured to run on one or more processors.
  • Thus, as an example, a “part” or “module” may include components such as software components, object-oriented software components, class components and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided within the components and the “parts” or “modules” may be combined into a smaller number of components and “parts” or “modules”, or further separated into additional components and “parts” or “modules”.
  • In this specification, a computer includes all of the various devices capable of performing arithmetic processing and providing a result to the user.
  • For example, a computer may be not only a desktop PC or a notebook but also a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC, a personal digital assistant (PDA), and the like.
  • When a head mounted display (HMD) device includes a computing function, the HMD device may be a computer.
  • Furthermore, a computer may correspond to a server that receives a request from a client and performs information processing.
  • FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
  • Referring to FIG. 1, the robotic surgery system includes a medical imaging apparatus 10, a server 100, and a control unit 30, a display 32 and a surgical robot 34 provided in an operating room.
  • In one embodiment, the medical imaging apparatus 10 may be omitted from the robotic surgery system according to the disclosed embodiment.
  • In one embodiment, the surgical robot 34 includes an imaging device 36 and a surgical tool 38.
  • In one embodiment, robotic surgery is performed by the user controlling the surgical robot 34 using the control unit 30. The robotic surgery may also be performed automatically by the control unit 30 without the user's control.
  • The server 100 is a computing device including at least one processor and a communication unit.
  • The control unit 30 includes a computing device having at least one processor and a communication unit.
  • In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
  • The imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera and is used to photograph the object, that is, the surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled to a surgical arm of the surgical robot 34.
  • In one embodiment, the image photographed by the imaging device 36 is displayed on the display 32.
  • In one embodiment, the surgical robot 34 includes one or more surgical tools 38 that can perform operations such as cutting, clipping, fixing and grabbing at the surgical site.
  • The surgical tool 38 is used coupled to a surgical arm of the surgical robot 34.
  • The control unit 30 receives the information necessary for the surgery from the server 100, or generates the information necessary for the surgery and provides it to the user. For example, the control unit 30 displays the generated or received information on the display 32.
  • For example, the user performs the robotic surgery by manipulating the control unit 30 to control the movement of the surgical robot 34 while looking at the display 32.
  • The server 100 generates the information necessary for the robotic surgery using medical image data of the object previously photographed by the medical imaging apparatus 10, and provides the generated information to the control unit 30.
  • The control unit 30 provides the user with the information received from the server 100 by displaying it on the display 32, or controls the surgical robot 34 using the information received from the server 100.
  • In one embodiment, the means usable as the medical imaging apparatus 10 are not limited; for example, various other medical image acquisition means such as CT, X-ray, PET and MRI may be used.
  • The present invention seeks to provide more meaningful information to the medical staff performing surgery by using the surgical image or the surgical data obtainable during the surgical procedure.
  • Hereinafter, "computer" may mean the server 100 or the control unit 30 of FIG. 1, but is not limited thereto and is used to encompass any device capable of performing computing processing.
  • For example, the computer may be a computing device provided separately from the devices shown in FIG. 1.
  • The embodiments disclosed below are not applicable only in connection with the robotic surgery system illustrated in FIG. 1, and may be applied to all kinds of embodiments in which a surgical image can be acquired and utilized in a surgical procedure.
  • FIGS. 2 and 3 are flowcharts illustrating a method for providing surgical information using a surgical image according to an embodiment of the present invention.
  • Referring to FIG. 2, the method includes obtaining a surgical image of a specific surgery (S100),
  • recognizing the surgical step within the specific surgery that corresponds to the surgical image (S200),
  • estimating an organ candidate group including at least one organ that can be extracted at the surgical step (S300),
  • and specifying an organ region in the surgical image based on the positional relationship between the organs in the organ candidate group (S400).
  • First, the computer may obtain a surgical image of a specific surgery (S100).
  • Medical staff may perform the actual surgery on the subject directly, or may perform minimally invasive surgery using a laparoscope or endoscope as well as the surgical robot described in FIG. 1.
  • In this case, the computer may acquire a surgical image capturing scenes including the surgical operations performed in the surgical procedure, the surgical tools involved, the surgical site, and the like.
  • In one embodiment, the computer may acquire, from a camera inserted into the body of the subject, a surgical image capturing a scene including the surgical site currently being operated on and the surgical tools.
  • The surgical image may include one or more image frames.
  • Each image frame may represent a scene in which a surgical operation is being performed, including the surgical site of the subject, the surgical tools, and the like.
  • For example, the surgical image may be composed of image frames in which the surgical operations are recorded scene by scene over time during the surgical procedure.
  • As another example, the surgical image may be composed of image frames recording each surgical scene according to spatial movement, such as changes of the surgical site or of the camera position, during the surgery.
  • Next, the computer may recognize the surgical step within the specific surgical procedure that corresponds to the surgical image acquired in step S100 (S200).
  • In one embodiment, the computer may perform deep learning on the surgical image (e.g., at least one image frame) acquired during the specific surgical procedure, and derive context information for the surgical image through this learning.
  • The computer may then recognize, based on the context information of the surgical image, the surgical step corresponding to the surgical image within the specific surgical procedure comprising at least one surgical step. A detailed process is described with reference to FIG. 4.
  • FIG. 4 is an example showing a process of recognizing a surgical step based on a surgical image according to an embodiment of the present invention.
  • Referring to FIG. 4, the computer may learn from the surgical image (i.e., at least one image frame) acquired during a specific surgery.
  • First, the computer may perform learning using a convolutional neural network (CNN) on each image frame included in the surgical image (S210). For example, the computer may learn the characteristics of the surgical image by feeding each image frame into at least one layer (e.g., a convolutional layer). As a result of this learning, the computer can infer what each image frame shows or means; a minimal sketch of such per-frame learning is given below.
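  • By way of illustration only (the patent discloses no code), per-frame learning of this kind could be sketched as follows. This is a minimal sketch assuming PyTorch; the class name FrameCNN, the layer sizes and the feature dimension are hypothetical rather than taken from the disclosure:

```python
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Minimal sketch of step S210 (illustrative, not the patent's network):
    a small CNN that turns one image frame into a feature vector. Layer
    sizes and the feature dimension are arbitrary placeholders."""
    def __init__(self, feature_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling over the frame
        )
        self.proj = nn.Linear(64, feature_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) -> (batch, feature_dim)
        return self.proj(self.features(frames).flatten(1))
```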
  • Next, the computer may perform learning using a recurrent neural network (RNN) on the surgical image representations (i.e., for at least one image frame) derived from the CNN learning (S220).
  • In one embodiment, the computer may receive the surgical image frame by frame and learn what the surgical image in each frame means by using an RNN (e.g., the LSTM method).
  • For example, the computer may receive the surgical image of the current view (e.g., the first image frame) and at least one surgical image of previous views (e.g., the second to n-th image frames), perform learning using the RNN (e.g., LSTM), and derive as the learning result the context information of the surgical image of the current view (i.e., the first image frame).
  • Here, the context information is information indicating what the surgical image means, and may include information related to a specific surgical operation in the surgical procedure.
  • That is, the computer may grasp the meaning (i.e., the context information) of the surgical image based on the learning result, and thereby recognize the surgical step corresponding to the surgical image.
  • In other words, the computer may recognize the surgical step for each image frame based on the per-frame context information derived as the RNN learning result (S230); a sketch of this temporal step recognition follows below.
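  • Continuing the illustrative FrameCNN sketch above (again assuming PyTorch; the class name, hidden size and number of surgical steps are hypothetical), the RNN-based recognition of steps S220 to S230 could look like this:

```python
import torch
import torch.nn as nn

class StepRecognizer(nn.Module):
    """Minimal sketch of steps S220-S230, building on the FrameCNN sketch
    above: an LSTM consumes the CNN features of the previous and current
    frames and predicts the surgical step of the current frame."""
    def __init__(self, feature_dim: int = 256, num_steps: int = 21):
        super().__init__()
        self.cnn = FrameCNN(feature_dim)
        self.lstm = nn.LSTM(feature_dim, 128, batch_first=True)
        self.head = nn.Linear(128, num_steps)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W); the last frame is the current view
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)      # temporal context across frames
        return self.head(out[:, -1])   # step logits for the current frame
```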
  • In one embodiment, a specific surgical procedure may be composed of at least one surgical step.
  • Here, at least one surgical step may be predefined for each specific surgery.
  • For example, the surgical steps may be classified according to the time course of the specific surgery, or each surgical step may be classified so as to correspond to a surgical part based on the surgical site.
  • As another example, the surgical steps may be classified based on the position or movement range of the camera during the specific surgery, or based on changes (e.g., replacement) of the surgical tools during the specific surgery.
  • The surgical steps thus classified may be organized in a hierarchical structure.
  • For example, the specific surgical procedure may be classified step by step from the lowest level to the highest level to form a hierarchy.
  • Here, the lowest level is composed of the smallest units representing the surgical procedure, and may include minimal surgical operations, each having meaning as one smallest action.
  • For example, the computer may recognize a surgical operation exhibiting one constant motion pattern, such as cutting, grabbing or moving, as a minimal surgical operation and place it at the lowest level (i.e., the lowest surgical step).
  • The computer may then form higher levels by grouping at least one minimal surgical operation according to specific classification criteria (e.g., time course, surgical site, camera movement, change of surgical tool, etc.).
  • For example, minimal surgical operations can be grouped into one higher level when, taken together, they have meaning as one specific surgical action.
  • When minimal surgical operations such as clipping, moving and cutting are performed in succession, for example, the computer may recognize this as a surgical action for dividing a blood vessel and group them into one higher level.
  • Likewise, when minimal surgical operations such as grabbing, lifting and cutting are performed in succession, the computer may recognize this as a surgical action for removing fat and group them into one higher level.
  • In this way, a hierarchical structure can finally be formed up to the highest level (i.e., the highest surgical step) of the specific surgical procedure.
  • That is, a specific surgical procedure may have a hierarchical structure such as a tree.
  • Since the computer can estimate the meaning of the surgical image by deriving context information for each image frame, the computer can recognize, among the surgical steps predefined for the specific surgery, the specific surgical step to which each image frame belongs.
  • The recognized surgical step may mean any one surgical operation belonging to any one level (e.g., the first to n-th levels) of the hierarchy configured for the specific surgery, from the lowest level up to the highest level (level n). That is, through the per-frame context information, the computer may recognize the surgical step belonging to a specific level in the hierarchical structure of the surgical procedure. One way such a hierarchy could be encoded is sketched below.
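  • As an illustration (not from the patent), one simple way to encode such a tree-shaped hierarchy is a nested mapping from the procedure down to minimal surgical operations; all step names below are hypothetical examples:

```python
# Illustrative encoding (not from the patent) of the hierarchy described
# above: a procedure (highest level) broken down through intermediate
# surgical actions to minimal surgical operations (lowest level).
SURGERY_HIERARCHY = {
    "gastrectomy": {                          # highest level: the procedure
        "divide_vessel": {                    # intermediate level: an action
            "clip_vessel": ["grab", "clip"],  # lowest level: minimal motions
            "cut_vessel": ["move", "cut"],
        },
        "remove_fat": {
            "dissect_fat": ["grab", "lift", "cut"],
        },
    },
}
```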
  • Next, the computer may estimate an organ candidate group including at least one organ that can be extracted at the surgical step recognized in step S200 (S300).
  • As described above, a specific surgical procedure may be composed of predetermined surgical steps, and accordingly various pieces of information (e.g., information about a specific surgical operation, a specific surgical site, a specific surgical tool, the camera position, etc.) can be extracted from each predetermined surgical step.
  • In one embodiment, the computer may extract the organ information observable at each surgical step and generate an organ candidate group for each surgical step. Alternatively, the computer may obtain a previously generated organ candidate group for each surgical step.
  • For example, the computer may recognize the first to n-th surgical steps for the respective image frames.
  • The computer may extract the organs that can be observed during each of the first to n-th surgical steps, and construct, for each surgical step, an organ candidate group including the extracted organs.
  • For example, the computer may recognize the k-th surgical step (e.g., a left hepatic artery (LHA) division step) from a specific surgical image (the k-th image frame), estimate the stomach, liver and spleen as the organs observable at the recognized k-th step, and compose them into the organ candidate group. One way such a per-step lookup could be encoded is sketched below.
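  • As an illustrative sketch (the patent specifies no data format), such a per-step organ candidate lookup could be as simple as the following; the LHA entry follows the example in the text, while all other names are hypothetical:

```python
# Hypothetical lookup table for step S300: which organs can be observed
# at each recognized surgical step.
ORGAN_CANDIDATES = {
    "divide_left_hepatic_artery": ["stomach", "liver", "spleen"],
    "dissect_fat": ["stomach", "omentum"],
}

def estimate_candidates(step: str) -> list[str]:
    """Return the organ candidate group for a recognized surgical step."""
    return ORGAN_CANDIDATES.get(step, [])
```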
  • Next, the computer may specify an organ region in the surgical image (i.e., at least one image frame) based on information about at least one organ in the organ candidate group estimated in step S300 (S400).
  • In one embodiment, the computer may specify the organ region in the surgical image based on at least one of the positional relationship between the organs in the organ candidate group estimated from the surgical step for the surgical image and the texture information of those organs. A detailed process is described with reference to FIG. 5.
  • FIG. 5 is an example illustrating a process of specifying an organ region in a surgical image based on information about an organ in an organ candidate group according to an embodiment of the present invention.
  • First, the computer may calculate position information of at least one organ included in the surgical image by learning the positional relationship between the organs in the organ candidate group (S410).
  • In one embodiment, the computer may construct a neural network (e.g., a backbone network) including at least one layer (e.g., a convolutional layer) to learn the positional relationship between the organs in the organ candidate group.
  • For example, the computer may input the surgical image (i.e., a specific image frame) and the organ candidate group into the backbone network, and learn the positional relationship between the organs in the organ candidate group in a first convolutional layer.
  • The computer may then calculate the position information of at least one organ included in the surgical image based on the learning result on the positional relationship between the organs in the organ candidate group.
  • Here, the position information of an organ may use the coordinate values of a specific point (e.g., the center point) of the region in which the organ is distributed in the surgical image.
  • Each organ has its own fixed position inside the body.
  • Accordingly, position information for the organs in the organ candidate group can also be obtained.
  • Thus, the computer can perform the learning based on the fixed positions of the organs in the organ candidate group. That is, although organs with fixed positions may be photographed as if they existed at different positions depending on the field of view (FOV), the angle and the position of the camera, the positional relationship between the organs is maintained.
  • Accordingly, the computer can learn the spatial topological relationships between the organs in the organ candidate group.
  • By learning these spatial topological relationships, the computer can recognize the arrangement of the organs present in the surgical image.
  • The computer may then calculate the position information of the organs in the surgical image based on this arrangement.
  • Next, the computer may learn texture information about the organs in the organ candidate group and thereby calculate the texture information of at least one organ included in the surgical image (S420).
  • In one embodiment, the computer may include at least one layer (e.g., a convolutional layer) in the neural network (e.g., the backbone network) for learning texture information about the organs in the organ candidate group.
  • For example, the computer may input the surgical image (i.e., a specific image frame) and the organ candidate group into the backbone network, and derive the texture information of the organs in the organ candidate group through learning in a second convolutional layer.
  • The computer may then calculate the texture information of at least one organ included in the surgical image based on the learning result on the texture information of the organs in the organ candidate group. A two-branch sketch covering steps S410 and S420 is given below.
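  • By way of illustration (the patent does not specify an architecture), the two learning branches of steps S410 and S420 could be sketched as a shared backbone with a position branch and a texture branch, assuming PyTorch; the class name OrganBackbone and all layer sizes are hypothetical:

```python
import torch
import torch.nn as nn

class OrganBackbone(nn.Module):
    """Minimal sketch of steps S410-S420 (not the patent's actual network):
    a shared convolutional backbone with a first branch learning inter-organ
    positional relationships (organ centre estimates) and a second branch
    learning organ texture."""
    def __init__(self, num_candidates: int):
        super().__init__()
        self.num_candidates = num_candidates
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # first branch: an (x, y) centre estimate per candidate organ
        self.position = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_candidates * 2),
        )
        # second branch: a per-pixel texture embedding, to be compared with
        # texture prototypes learned for the organs in the candidate group
        self.texture = nn.Conv2d(64, 32, 3, padding=1)

    def forward(self, frame: torch.Tensor):
        # frame: (batch, 3, H, W)
        shared = self.backbone(frame)
        centres = self.position(shared).view(-1, self.num_candidates, 2)
        texture_map = self.texture(shared)   # (batch, 32, H/2, W/2)
        return centres, texture_map
```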
  • Next, the computer may specify the organ region in which each organ exists in the surgical image by detecting, based on the organ position information calculated in step S410, the region corresponding to the organ texture information calculated in step S420 (S430).
  • In one embodiment, the computer recognizes each organ at its location based on the position information of the organs present in the surgical image, and detects, using the texture information calculated for the recognized organ, the distribution region in the surgical image that has the same or similar texture information. For example, if the computer has calculated through learning the position information of three organs (e.g., organs A, B and C) in the surgical image, the computer recognizes organ A at the position of organ A in the surgical image, and then detects in the surgical image, based on the texture information of organ A calculated through learning, the organ-A region matching that texture information.
  • By repeating the same process for organ B and organ C, the computer can detect the organ-B region and the organ-C region. Thus, the computer can finally specify each organ region in which each organ included in the surgical image exists; a sketch along these lines follows below.
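  • A minimal sketch of step S430 along these lines, assuming PyTorch and the OrganBackbone outputs above; the function name, the texture-prototype representation and the similarity threshold are hypothetical:

```python
import torch
import torch.nn.functional as F

def specify_organ_regions(centres, texture_map, prototypes, thresh=0.8):
    """Illustrative sketch of step S430 (not the patent's method): keep the
    pixels whose texture embedding is similar to each candidate organ's
    learned texture prototype. `prototypes` is (num_organs, 32) and the
    threshold is arbitrary. A fuller version would use `centres` (from the
    position branch) to keep only the connected region around each organ's
    estimated centre; this sketch returns the raw similarity masks."""
    b, c, h, w = texture_map.shape
    flat = F.normalize(texture_map.flatten(2), dim=1)        # (b, c, h*w)
    masks = []
    for proto in F.normalize(prototypes, dim=1):             # one organ at a time
        sim = (proto.view(1, c, 1) * flat).sum(dim=1)        # cosine similarity
        masks.append(sim.view(b, h, w) > thresh)             # boolean organ mask
    return masks
```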
  • In one embodiment, the computer may display the specified organ regions on the surgical image and provide them in real time to the medical staff performing the actual surgery.
  • That is, since the computer can specify the organ regions present in the surgical image through steps S100 to S400, it can display the specified organ regions on the corresponding surgical image output on the screen. Accordingly, meaningful information (important organ information) can be provided more effectively through the surgical image during the surgical procedure.
  • In one embodiment, the computer may match the organ regions specified in the surgical image onto a virtual body model.
  • The computer may then perform a simulation based on the organ regions matched onto the virtual body model.
  • Here, the virtual body model may be 3D modeling data generated based on medical image data (e.g., medical images taken by CT, PET, MRI, etc.) previously obtained by imaging the inside of the body of the subject.
  • For example, the virtual body model may be modeled to conform to the body of the subject, and may be corrected to the same state as at the time of the actual surgery.
  • Medical staff can perform rehearsals or simulations using a virtual body model implemented identically to the physical state of the subject, and can experience the same conditions as during the actual surgery.
  • In addition, virtual surgery data recording rehearsal or simulation actions performed on the virtual body model can be obtained.
  • For example, the virtual surgery data may be a virtual surgery image including the surgical site on which virtual surgery is performed on the virtual body model, or data recording the surgical operations performed on the virtual body model.
  • In one embodiment, since the computer can obtain the types of the organs present in the surgical image, their position information, the positional relationships between them, their texture information and the like, it can match the organ regions exactly onto the virtual body model. This matching makes it possible to reproduce in the virtual body model the same situation as in the surgery currently being performed. Accordingly, the computer may perform a simulation by applying, based on the matched organ regions on the virtual body model, the surgical tools and surgical operations used in the surgical procedure actually performed by the medical staff. In other words, the simulation can proceed in the same way as the physical surgery currently in progress, which can assist the medical staff in performing the surgery more accurately and effectively. Furthermore, the same surgical process as the actual surgical situation can be reproduced for re-operation or training, improving the learning effect of the medical staff.
  • In one embodiment, the computer may acquire simulation data through a simulation on the virtual body model, and generate cue sheet data for the corresponding surgical procedure based on the acquired simulation data.
  • For example, the computer may acquire simulation data recording the target surgical site, the surgical tools, the surgical operations and the like by performing a simulation using the virtual body model.
  • The computer may generate cue sheet data about the actual surgical procedure performed by the medical staff based on the simulation data containing these records.
  • Here, the cue sheet data may consist of information in which the surgical procedure is arranged in temporal order based on the surgical operations performed at the time of surgery.
  • The cue sheet data may include surgical information for each surgical operation that is meaningful or that can be performed as a minimum unit.
  • Here, the surgical information may include surgical tool information, surgical site information, surgical operation information, and the like.
  • Such cue sheet data can be used to reproduce the same surgical operations as in the actual surgical procedure, and can thereby be used to evaluate the surgery or, later, as a learning model. One possible record layout for such cue sheet data is sketched below.
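  • As an illustrative sketch (the patent prescribes no format for cue sheet data), one possible record layout in Python is shown below; all fields and example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CueSheetEntry:
    """Hypothetical record layout for cue sheet data: surgical actions
    ordered in time, each carrying the tool, site and action information
    (the three kinds of surgical information named above)."""
    t_start: float          # seconds from the start of the (virtual) surgery
    surgical_tool: str
    surgical_site: str
    surgical_action: str    # a minimal, meaningful surgical operation

# A cue sheet is then simply a time-ordered list of such entries:
cue_sheet = [
    CueSheetEntry(12.5, "grasper", "left hepatic artery", "grab"),
    CueSheetEntry(14.0, "clip applier", "left hepatic artery", "clip"),
    CueSheetEntry(16.2, "scissors", "left hepatic artery", "cut"),
]
```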
  • FIG. 6 is a view schematically showing the configuration of an apparatus 200 for performing a method for providing surgical information using a surgical image according to an embodiment of the present invention.
  • The processor 210 may include a connection passage (e.g., a bus or the like) for transmitting and receiving signals between one or more cores (not shown), a graphics processor (not shown) and/or other components.
  • The processor 210 executes one or more instructions stored in the memory 220 to perform the method of providing surgical information using a surgical image described with reference to FIGS. 2 to 5.
  • For example, by executing one or more instructions stored in the memory 220, the processor 210 may obtain a surgical image of a specific surgery, recognize the surgical step within the specific surgery that corresponds to the surgical image, estimate an organ candidate group including at least one organ that can be extracted at the surgical step, and specify an organ region in the surgical image based on the positional relationship between the organs in the organ candidate group.
  • Meanwhile, the processor 210 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing the signals (or data) processed inside the processor 210.
  • In addition, the processor 210 may be implemented in the form of a system on chip (SoC) including at least one of a graphics processor, RAM and ROM.
  • The memory 220 may store programs (one or more instructions) for the processing and control of the processor 210. The programs stored in the memory 220 may be divided into a plurality of modules according to their functions.
  • The method of providing surgical information using a surgical image according to the embodiments of the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
  • For the computer to read the program and execute the methods implemented as the program, the program may include code written in a computer language, such as C, C++, JAVA or machine language, that the computer's processor (CPU) can read through the computer's device interface. Such code may include functional code related to the functions defining what is necessary to execute the methods, and control code related to the execution procedures necessary for the computer's processor to execute those functions in a predetermined order.
  • The code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media required for the computer's processor to execute the functions should be referenced.
  • In addition, when the computer's processor needs to communicate with any other remote computer or server in order to execute the functions, the code may further include communication-related code indicating how to communicate with that remote computer or server using the computer's communication module, and what information or media should be transmitted and received during communication.
  • The stored medium means not a medium that stores data for a short time, such as a register, cache or memory, but a medium that stores data semi-permanently and that can be read by a device.
  • Specifically, examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like. That is, the program may be stored in various recording media on various servers to which the computer can connect, or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that computer-readable code is stored in a distributed fashion.
  • The steps of a method or algorithm described in connection with the embodiments of the present invention may be implemented directly in hardware, in a software module executed by hardware, or in a combination of the two. The software module may reside in RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable ROM), EEPROM (electrically erasable programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed is a method for providing surgical information using a surgical image. The method comprises the steps of: acquiring a surgical image of a specific surgery; recognizing a surgical step in the specific surgery corresponding to the surgical image; estimating an organ candidate group comprising at least one organ that can be extracted in the surgical step; and specifying an organ region in the surgical image on the basis of the positional relationship between the organs in the organ candidate group.
PCT/KR2019/002096 2018-02-20 2019-02-20 Method and device for providing surgical information using a surgical image WO2019164278A1 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
KR10-2018-0019868 2018-02-20
KR20180019868 2018-02-20
KR10-2018-0019867 2018-02-20
KR20180019867 2018-02-20
KR10-2018-0019866 2018-02-20
KR20180019866 2018-02-20
KR1020180145177A KR20190100011A (ko) 2018-02-20 2018-11-22 수술영상을 이용한 수술정보 제공 방법 및 장치
KR10-2018-0145177 2018-11-22

Publications (1)

Publication Number Publication Date
WO2019164278A1 (fr) 2019-08-29

Family

ID=67687855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/002096 WO2019164278A1 (fr) 2018-02-20 2019-02-20 Method and device for providing surgical information using a surgical image

Country Status (1)

Country Link
WO (1) WO2019164278A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100106834A (ko) * 2009-03-24 2010-10-04 주식회사 이턴 증강현실을 이용한 수술 로봇 시스템 및 그 제어 방법
KR20120126679A (ko) * 2011-05-12 2012-11-21 주식회사 이턴 수술 상황 판단 및 대응을 위한 수술 로봇 시스템의 제어 방법과 이를 기록한 기록매체 및 수술 로봇 시스템
KR101302595B1 (ko) * 2012-07-03 2013-08-30 한국과학기술연구원 수술 진행 단계를 추정하는 시스템 및 방법
KR20160086629A (ko) * 2015-01-12 2016-07-20 한국전자통신연구원 영상유도 수술에서 수술부위와 수술도구 위치정합 방법 및 장치
US20170039709A1 (en) * 2012-03-08 2017-02-09 Olympus Corporation Image processing device, information storage device, and image processing method
KR20180006622A (ko) * 2015-06-09 2018-01-18 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 의료 컨텍스트에서의 비디오 컨텐트 검색

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19757641

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19757641

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1202A DATED 15.02.2021)
