WO2019164278A1 - Method and device for providing surgical information using surgical image - Google Patents
Method and device for providing surgical information using surgical image
- Publication number
- WO2019164278A1 (application PCT/KR2019/002096)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- surgical
- surgery
- organ
- image
- information
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
Definitions
- the present invention relates to a method and apparatus for providing surgical information using a surgical image.
- Open surgery refers to surgery in which the medical staff directly see and touch the part to be treated.
- Minimally invasive surgery, also known as keyhole surgery, is typified by laparoscopic surgery and robotic surgery.
- In laparoscopic surgery, small holes are made at the necessary sites without opening the abdomen, and a laparoscope fitted with a special camera and the surgical tools are inserted into the body, with the procedure observed through a video monitor.
- Such microsurgery is performed using a laser or special instruments.
- Robotic surgery performs minimally invasive surgery using a surgical robot.
- Radiosurgery refers to surgical treatment performed with radiation or a laser beam from outside the body.
- In these cases, the surgical image is obtained during the actual surgery, and the surgery proceeds based on it. It is therefore important to provide the surgical site and the various kinds of information related to it through the surgical image acquired during the actual surgery.
- The problem to be solved by the present invention is to provide a method and apparatus for providing surgical information using a surgical image.
- Another problem to be solved by the present invention is to provide a method and apparatus for estimating the organs observable in a surgical image by identifying the current surgical stage through the surgical image, and for providing information on the estimated organs.
- Another problem to be solved by the present invention is to provide a method and apparatus for providing a simulation environment identical to the actual progress of the surgery through the surgical image.
- Another problem to be solved by the present invention is to provide a method and apparatus for providing more meaningful organ information in the surgical image by analyzing the surgical image using deep learning.
- A method for providing surgical information using a surgical image, performed by a computer according to an embodiment of the present invention, includes obtaining a surgical image for a specific surgery, recognizing a surgical step in the specific surgery corresponding to the surgical image, estimating an organ candidate group including at least one organ that can be extracted in the surgical step, and specifying an organ region in the surgical image based on a positional relationship between organs in the organ candidate group.
- The recognizing of the surgical step may include performing learning based on at least one surgical image of a previous time point and the surgical image of the current time point, deriving context information on the surgical image through the learning, and recognizing, based on the context information, the surgical step corresponding to the surgical image among the surgical steps defined for the specific surgery.
- In the recognizing of the surgical step, when the specific surgery is organized as a hierarchical structure from a lowest hierarchy to a highest hierarchy, any one surgical operation belonging to a specific hierarchy in that structure may be recognized as the surgical step corresponding to the surgical image.
- The specifying of the organ region in the surgical image may include learning the positional relationship between the organs in the organ candidate group to calculate position information of an organ included in the surgical image, and specifying, based on the position information of the organ, the organ region in which the organ exists in the surgical image.
- The specifying of the organ region in the surgical image may further include learning texture information on the organs in the organ candidate group to calculate texture information of an organ included in the surgical image, and detecting, based on the position information of the organ, the organ region in the surgical image that corresponds to the texture information of the organ.
- The method may further include displaying the specified organ region on the surgical image and providing it to the user.
- The method may further include matching the specified organ region on a virtual body model and performing a simulation based on the matched organ region, wherein the virtual body model may be 3D modeling data generated based on medical image data obtained by imaging the inside of the body of the specific surgical subject.
- The method may further include obtaining simulation data through the simulation and generating cue sheet data for the specific surgery based on the simulation data.
- An apparatus according to an embodiment of the present invention includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory, wherein the processor, by executing the one or more instructions, performs the above-described method of providing surgical information using a surgical image.
- A computer program according to an embodiment of the present invention is combined with a computer, which is hardware, and stored in a computer-readable recording medium so as to perform the method of providing surgical information using a surgical image.
- According to the present invention, the surgical stage of the surgery currently being performed can be accurately determined through the surgical image. In addition, the organs observable in the surgical image can be estimated by identifying the current surgical stage, which improves the organ recognition rate.
- According to the present invention, the position and shape of the actual organs can be accurately determined through the surgical image.
- The position and shape of these organs can then be used to provide more meaningful information to the medical staff performing the surgery.
- According to the present invention, by accurately specifying the position and shape of the actual organs through the surgical image, a simulation environment identical to the actual surgery in progress can be implemented.
- Medical staff can therefore perform rehearsals or virtual surgery more accurately and effectively, which improves their learning effect.
- FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- FIGS. 2 and 3 are flowcharts illustrating a method for providing surgical information using a surgical image according to an embodiment of the present invention.
- FIG. 4 is an example showing a process of recognizing a surgical step based on a surgical image according to an embodiment of the present invention.
- FIG. 5 is an example illustrating a process of specifying an organ region in a surgical image based on information about an organ in an organ candidate group according to an embodiment of the present invention.
- FIG. 6 is a view schematically showing the configuration of an apparatus 200 for performing a method for providing surgical information using a surgical image according to an embodiment of the present invention.
- As used herein, a “part” or “module” refers to a software or hardware component such as an FPGA or an ASIC, and a “part” or “module” performs certain roles. However, a “part” or “module” is not limited to software or hardware.
- A “part” or “module” may be configured to reside in an addressable storage medium or may be configured to run on one or more processors.
- Thus, as an example, a “part” or “module” may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functions provided within the components and the “parts” or “modules” may be combined into a smaller number of components and “parts” or “modules”, or further separated into additional components and “parts” or “modules”.
- In this specification, a computer includes any device capable of performing arithmetic processing and providing a result to a user.
- For example, the computer may be a desktop PC or a notebook computer, as well as a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC, a PDA (Personal Digital Assistant), and the like.
- In addition, when a head mounted display (HMD) device includes a computing function, the HMD device may be the computer.
- the computer may correspond to a server that receives a request from a client and performs information processing.
- FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- the robotic surgical system includes a medical imaging apparatus 10, a server 100, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
- the medical imaging apparatus 10 may be omitted in the robot surgery system according to the disclosed embodiment.
- surgical robot 34 includes imaging device 36 and surgical instrument 38.
- the robot surgery is performed by the user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robot surgery may be automatically performed by the controller 30 without the user's control.
- the server 100 is a computing device including at least one processor and a communication unit.
- the controller 30 includes a computing device including at least one processor and a communication unit.
- the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- the imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera device and is used to photograph an object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
- The image captured by the imaging device 36 is displayed on the display 32.
- surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, fixing, grabbing operations, and the like, of the surgical site.
- Surgical tool 38 is used in conjunction with the surgical arm of the surgical robot 34.
- the controller 30 receives information necessary for surgery from the server 100 or generates information necessary for surgery and provides the information to the user. For example, the controller 30 displays the information necessary for surgery, generated or received, on the display 32.
- the user performs the robot surgery by controlling the movement of the surgical robot 34 by manipulating the control unit 30 while looking at the display 32.
- The server 100 generates information necessary for the robotic surgery using medical image data of the object previously captured by the medical imaging apparatus 10, and provides the generated information to the controller 30.
- the controller 30 displays the information received from the server 100 on the display 32 to provide the user, or controls the surgical robot 34 by using the information received from the server 100.
- The means that can be used as the medical imaging apparatus 10 is not limited; for example, various other medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
- The present invention aims to provide more meaningful information to the medical staff performing surgery by using the surgical image or surgical data that can be obtained during the surgical process.
- Hereinafter, the computer may mean the server 100 or the controller 30 of FIG. 1, but is not limited thereto and is used to encompass any device capable of performing computing processing.
- the computer may be a computing device provided separately from the device shown in FIG. 1.
- The embodiments disclosed below are not applicable only to the robotic surgery system illustrated in FIG. 1, but may be applied to any embodiment in which a surgical image can be acquired and utilized during a surgical procedure.
- FIGS. 2 and 3 are flowcharts illustrating a method for providing surgical information using a surgical image according to an embodiment of the present invention.
- Obtaining a surgical image for a specific surgery (S100).
- Recognizing a surgical step in the specific surgery corresponding to the surgical image (S200).
- Estimating an organ candidate group including at least one organ that can be extracted in the surgical step (S300).
- Specifying an organ region in the surgical image based on the positional relationship between organs in the organ candidate group (S400).
- the computer may acquire a surgical image for a specific surgery (S100).
- The medical staff may perform the actual surgery on the subject directly, or may perform minimally invasive surgery using a laparoscope or an endoscope as well as the surgical robot described with reference to FIG. 1.
- In this case, the computer may acquire a surgical image capturing a scene that includes the surgical operation performed during the surgical procedure, the related surgical tools, the surgical site, and the like.
- For example, the computer may acquire, from a camera inserted into the body of the surgical subject, a surgical image capturing a scene that includes the surgical site currently being operated on and the surgical tools.
- the surgical image may include one or more image frames.
- Each image frame may represent a scene in which a surgical operation is performed, including a surgical site, a surgical tool, and the like of a surgical subject.
- The surgical image may be composed of image frames in which the surgical operations are recorded scene by scene over time during the surgical procedure.
- Alternatively, the surgical image may be composed of image frames that record each surgical scene according to spatial movement, such as a change of the surgical site or of the camera position, during the surgery.
- the computer may recognize a surgical workflow stage in a specific surgical process corresponding to the surgical image acquired in step S100 (S200).
- First, the computer may perform deep-learning-based learning on the surgical image (e.g., at least one image frame) acquired during the specific surgical procedure and derive context information of the surgical image through the learning.
- Then, based on the context information of the surgical image, the computer may recognize the surgical stage corresponding to the surgical image within the specific surgical procedure comprising at least one surgical stage. A detailed process thereof will be described with reference to FIG. 4.
- FIG. 4 is an example showing a process of recognizing a surgical step based on a surgical image according to an embodiment of the present invention.
- Referring to FIG. 4, the computer may learn from a surgical image (i.e., at least one image frame) acquired during a specific surgery.
- First, the computer may perform learning using a convolutional neural network (CNN) on each image frame included in the surgical image (S210). For example, the computer may learn the characteristics of the surgical image by inputting each image frame into at least one layer (e.g., a convolution layer). As a result of this learning, the computer can infer what each image frame shows or represents.
- the computer may perform learning using a recurrent neural network (RNN) on the surgical image (that is, at least one image frame) derived as a result of learning using the CNN (S220).
- the computer may receive the surgical image in units of frames and learn what the surgical image in the frame means by using an RNN (eg, an LSTM method).
- For example, the computer may receive the surgical image of the current time point (e.g., the first image frame) and at least one surgical image of previous time points (e.g., the second to n-th image frames), perform learning using the RNN (e.g., LSTM), and derive, as the learning result, the context information of the surgical image of the current time point (i.e., the first image frame).
- the context information is information indicating what the surgical image means, and may include information related to a specific surgical operation in the surgical procedure.
- the computer may recognize the meaning (ie, context information) of the surgery image based on the learning result of the surgery image, and recognize the surgery stage corresponding to the surgery image.
- the computer may recognize the operation step for each image frame based on the context information for each image frame derived as a learning result of the RNN (S230).
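- The frame-wise CNN followed by an RNN over time (steps S210 to S230) can be sketched roughly as below. This is only an illustrative sketch under assumed settings: the framework (PyTorch), layer sizes, frame resolution, clip length, and number of surgical steps are all assumptions for illustration and are not specified in the publication.

```python
# Illustrative sketch: per-frame CNN features aggregated by an LSTM to
# classify the surgical step of the current frame. All sizes are assumptions.
import torch
import torch.nn as nn

class PhaseRecognizer(nn.Module):
    def __init__(self, num_steps=10, feat_dim=128, hidden_dim=64):
        super().__init__()
        # CNN branch (S210): learns per-frame visual features.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # RNN branch (S220): aggregates previous frames plus the current frame
        # to derive context information for the current time point.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Classifier (S230): maps the context to one of the predefined steps.
        self.head = nn.Linear(hidden_dim, num_steps)

    def forward(self, frames):              # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        context, _ = self.lstm(feats)       # context per time step
        return self.head(context[:, -1])    # step logits for the current frame

# Example: classify the current frame given the 7 preceding frames.
model = PhaseRecognizer()
clip = torch.randn(1, 8, 3, 224, 224)
step_logits = model(clip)                   # shape (1, num_steps)
```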
- the specific surgical process may be composed of at least one surgical step.
- at least one surgical step may be predefined for each particular surgery.
- For example, each surgical step may be classified according to the time course of the specific surgery, or may be classified so as to correspond to each surgical region based on the surgical site.
- Alternatively, each surgical step may be classified based on the position or movement range of the camera during the specific surgery, or based on a change (e.g., replacement) of the surgical tool during the specific surgery.
- the surgical stages thus classified may be configured in a hierarchical structure.
- a specific surgical procedure may be classified step by step from the lowest hierarchy (level) to the highest hierarchy (level) to form a hierarchical structure.
- The lowest layer is composed of the smallest units representing the surgical procedure, and may include minimum surgical operations, each having meaning as a single minimal action.
- For example, the computer may recognize a surgical action showing one consistent motion pattern, such as cutting, grabbing, or moving, as a minimum surgical operation and assign it to the lowest hierarchy (i.e., the lowest surgical step).
- The computer may then form a higher layer by grouping at least one minimum surgical operation according to specific classification criteria (e.g., time course, surgical site, camera movement, change of surgical tool, etc.).
- That is, minimum surgical operations can be grouped into one higher layer when together they have meaning as a specific surgical action.
- For example, when minimum surgical operations such as clipping, moving, and cutting are performed in succession, the computer may recognize this as a surgical action for dividing a blood vessel and group them into one higher layer.
- Likewise, when minimum surgical operations such as grabbing, lifting, cutting, and kicking are performed in succession, the computer may recognize this as a surgical action for removing fat and group them into one higher layer.
- In this way, a hierarchical structure can finally be formed up to the highest hierarchy (i.e., the highest surgical stage) of the specific surgical procedure.
- That is, a specific surgical procedure may have a tree-like hierarchical structure.
- Since the computer can estimate the meaning of the surgical image by deriving context information for each image frame, it can recognize, among the surgical steps predefined for the specific surgery, the specific surgical step to which each image frame belongs.
- Here, the recognized surgical stage may mean any one surgical operation belonging to a specific layer (e.g., any one of the first to n-th levels) in the hierarchy configured for the specific surgery from the lowest level to the highest (n-th) level. That is, through the context information for each image frame, the computer may recognize the surgical step belonging to a specific layer in the hierarchical structure of the surgical procedure.
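- The hierarchical arrangement of surgical steps described above, from minimum surgical operations at the lowest level up to the whole procedure at the highest level, can be represented with a simple tree structure such as the following sketch. The step names and level numbering are hypothetical examples, not values from the publication.

```python
# A minimal sketch of a tree-like hierarchy of surgical steps.
from dataclasses import dataclass, field

@dataclass
class SurgicalStep:
    name: str
    level: int                               # 1 = whole procedure, larger = lower layer
    children: list = field(default_factory=list)

    def find_level(self, level):
        """Return all steps at a given hierarchy level."""
        if self.level == level:
            return [self]
        found = []
        for child in self.children:
            found.extend(child.find_level(level))
        return found

# Hypothetical hierarchy: procedure -> grouped actions -> minimum operations.
procedure = SurgicalStep("gastrectomy", 1, [
    SurgicalStep("vessel division", 2, [
        SurgicalStep("clipping", 3),
        SurgicalStep("moving", 3),
        SurgicalStep("cutting", 3),
    ]),
    SurgicalStep("fat removal", 2, [
        SurgicalStep("grabbing", 3),
        SurgicalStep("lifting", 3),
        SurgicalStep("cutting", 3),
    ]),
])
print([s.name for s in procedure.find_level(3)])   # all minimum operations
```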
- the computer may estimate an organ candidate group including at least one organ that may be extracted from the surgery step recognized in step S200 (S300).
- The specific surgical procedure may be composed of predetermined surgical steps, and thus various pieces of information (e.g., information about a specific surgical operation, a specific surgical site, a specific surgical tool, a camera position, etc.) can be extracted from each predetermined surgical step.
- Accordingly, the computer may extract information about the organs observable at each surgical stage and generate an organ candidate group for each stage. Alternatively, the computer may obtain a previously generated organ candidate group for each surgical stage.
- For example, the computer may recognize the first to n-th surgical steps for the respective image frames.
- The computer may then extract the organs that can be observed at the time of surgery from each of the first to n-th surgical steps, and construct, for each surgical step, an organ candidate group including the extracted organs.
- For example, when the computer recognizes the k-th surgical step (e.g., a left hepatic artery (LHA) incision step) from a specific surgical image (the k-th image frame), it may estimate the stomach, liver, and spleen as the organs observable in that step and compose them into the organ candidate group.
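- Such a per-step organ candidate lookup can be sketched as below. The step names and organ lists in the mapping are invented for illustration (only the LHA-incision example above comes from the text); a real system would derive or predefine these groups for each recognized step.

```python
# Hypothetical mapping from a recognized surgical step to its organ candidates.
ORGAN_CANDIDATES = {
    "trocar_insertion": ["abdominal_wall"],
    "lha_incision": ["stomach", "liver", "spleen"],   # k-th step example from the text
    "vessel_division": ["stomach", "blood_vessel"],
}

def estimate_organ_candidates(step_name):
    """Step S300: return the organs that may be observed in this step."""
    return ORGAN_CANDIDATES.get(step_name, [])

print(estimate_organ_candidates("lha_incision"))      # ['stomach', 'liver', 'spleen']
```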
- The computer may specify an organ region in the surgical image (i.e., at least one image frame) based on information on at least one organ in the organ candidate group estimated in step S300 (S400).
- Specifically, the computer may specify the organ region in the surgical image based on at least one of the positional relationship between organs in the organ candidate group estimated for the surgical stage of the surgical image and the texture information of those organs. A detailed process thereof will be described with reference to FIG. 5.
- FIG. 5 is an example illustrating a process of specifying an organ region in a surgical image based on information about an organ in an organ candidate group according to an embodiment of the present invention.
- the computer may calculate positional information of at least one organ included in a surgical image by learning a positional relationship between organs in an organ candidate group (S410).
- For example, the computer may construct a neural network (e.g., a backbone network) that includes at least one layer (e.g., a convolution layer) to learn the positional relationship between organs in the organ candidate group.
- the computer may input a surgical image (that is, a specific image frame) and an organ candidate group into the backbone network, and determine the positional relationship between organs in the organ candidate group through learning in the first convolutional layer.
- the computer may calculate location information of at least one organ included in the surgical image, based on a learning result about the positional relationship between organs in the organ candidate group.
- The position information of an organ may use the coordinate values of a specific point (for example, the center point) of the region in which the organ is distributed in the surgical image.
- Each organ has its own fixed position inside the body.
- Accordingly, the location information of the organs in the organ candidate group can also be obtained in advance.
- The computer can therefore perform the learning based on the fixed locations of the organs in the organ candidate group. That is, although organs having fixed positions may appear to be located at different positions depending on the field of view (FOV), angle, and position of the camera, the positional relationship between the organs is maintained.
- the computer can learn the spatial topological relationship between organs in the organ candidate group.
- the computer can recognize the organ placement existing in the surgical image by learning the spatial topological relationships between these organs.
- the computer may calculate the position information of the organs in the surgical image based on the organ placement.
- the computer may learn texture information about organs in the organ candidate group to calculate texture information of at least one organ included in the surgical image (S420).
- For example, the computer may include at least one layer (e.g., a convolution layer) in the neural network (e.g., the backbone network) to learn the texture information of the organs in the organ candidate group.
- the computer may input a surgical image (ie, a specific image frame) and an organ candidate group into the backbone network, and derive texture information of organs in the organ candidate group through learning in the second convolutional layer.
- the computer may calculate texture information of at least one organ included in the surgical image based on the learning result of the texture information of the organs in the organ candidate group.
- Next, the computer may specify the organ region in which an organ exists in the surgical image based on the location information of the organs calculated in step S410, and detect the area corresponding to the texture information of the organ calculated in step S420 (S430).
- In other words, the computer recognizes each organ at its location based on the location information of the organs present in the surgical image, and then, using the texture information calculated for the recognized organ, detects the distribution area in the surgical image that has the same or similar texture.
- For example, if the computer calculates the position information of three organs (e.g., organs A, B, and C) in the surgical image through learning, the computer recognizes organ A at the position of organ A in the surgical image and detects, based on the texture information of organ A calculated through learning, the organ A region in the surgical image that matches that texture information.
- the computer can detect the B organ region and the C organ region by repeating the same process for the B organ and the C organ. Thus, the computer can finally specify each organ region in which each organ included in the surgical image is present.
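- One way to realize steps S410 to S430 is a shared backbone with two convolutional branches, one regressing organ center coordinates from the learned positional relationships and one producing texture-based per-organ maps, which are then combined to specify each organ region. The sketch below is an assumption-laden illustration (framework, layer sizes, and number of candidate organs are invented), not the publication's actual network.

```python
# Illustrative two-branch sketch of organ region specification (S410-S430).
import torch
import torch.nn as nn

class OrganRegionNet(nn.Module):
    def __init__(self, num_organs=3):
        super().__init__()
        self.num_organs = num_organs
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # First convolutional branch (S410): organ location from positional relationships.
        self.loc_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_organs * 2),            # an (x, y) center per candidate organ
        )
        # Second convolutional branch (S420): texture-based per-pixel organ map.
        self.tex_head = nn.Conv2d(32, num_organs, kernel_size=1)

    def forward(self, frame):                          # frame: (batch, 3, H, W)
        feats = self.backbone(frame)
        centers = self.loc_head(feats).reshape(-1, self.num_organs, 2)
        masks = torch.sigmoid(self.tex_head(feats))    # (batch, num_organs, H, W)
        return centers, masks

model = OrganRegionNet()
centers, masks = model(torch.randn(1, 3, 128, 128))
# An organ region can then be specified (S430) as the texture map that matches
# the organ and contains its predicted center point.
```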
- The computer may display the specified organ region in the surgical image and provide it in real time to the medical staff performing the actual surgery.
- That is, the computer can specify the organ region present in the surgical image through steps S100 to S400 and display the specified organ region on the corresponding surgical image output on the screen. Accordingly, meaningful information (i.e., important organ information) can be provided more effectively through the surgical image during the surgical process.
- the computer can match the organ region specified in the surgical image on the virtual body model.
- The computer may also perform a simulation based on the organ region matched on the virtual body model.
- Here, the virtual body model may be three-dimensional modeling data generated based on medical image data (e.g., medical images taken through CT, PET, MRI, etc.) obtained in advance by imaging the inside of the body of the surgical subject.
- the model may be modeled in accordance with the body of the surgical subject, and may be corrected to the same state as the actual surgical state.
- Medical staff can perform rehearsals or simulations using a virtual body model that is implemented in the same way as the physical state of the subject, and can experience the same state as during the actual surgery.
- In this case, virtual surgery data including the rehearsal or simulation actions performed on the virtual body model can be obtained.
- the virtual surgery data may be a virtual surgery image including a surgical site on which a virtual surgery is performed on a virtual body model, or may be data recorded on a surgical operation performed on the virtual body model.
- Specifically, the computer can obtain the types of organs present in the surgical image, their location information, the positional relationship between them, their texture information, and the like, and match them exactly onto the virtual body model. This matching reproduces, in the virtual body model, the same situation as the real surgery currently being performed. Accordingly, the computer may perform a simulation by applying, on the matched organ regions of the virtual body model, the surgical tools and surgical operations used in the procedure actually performed by the medical staff. In other words, the simulation can be performed through the virtual body model in the same way as the physical surgery currently in progress, which can help the medical staff perform the surgery more accurately and effectively. In addition, the same surgical process as the actual surgical situation can be reproduced during re-operation or training, thereby improving the learning effect for the medical staff.
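- A minimal, assumption-laden sketch of this matching step is shown below: the organ regions specified in the surgical image are transferred onto a virtual body model so that a simulation can start from the same state as the ongoing surgery. The virtual model is represented here as a plain dictionary; a real system would use the 3D modeling data generated from the subject's CT/MRI/PET images.

```python
# Hypothetical matching of specified organ regions onto a virtual body model.
def match_to_virtual_model(virtual_model, organ_regions):
    """Overwrite each modelled organ's state with what the surgical image shows."""
    for organ, region in organ_regions.items():
        if organ in virtual_model:
            virtual_model[organ]["observed_center"] = region["center"]
            virtual_model[organ]["observed_mask"] = region["mask"]
    return virtual_model

# Example with made-up values: only the liver was specified in the image.
virtual_model = {"liver": {}, "stomach": {}, "spleen": {}}
organ_regions = {"liver": {"center": (0.42, 0.31), "mask": "liver_mask.png"}}
virtual_model = match_to_virtual_model(virtual_model, organ_regions)
```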
- the computer may acquire simulation data through simulation on a virtual body model, and generate cue sheet data on a corresponding surgical procedure based on the obtained simulation data.
- the computer may acquire simulation data that records a target surgical site, a surgical tool, a surgical operation, and the like by performing a simulation using a virtual body model.
- Based on the simulation data containing this record, the computer may then generate cue sheet data about the actual surgical procedure being performed by the medical staff.
- The cue sheet data may consist of information in which the surgical operations performed during the surgery are arranged in time order.
- The cue sheet data may include surgical information on each operation that can be performed as a meaningful surgical action or as a minimum unit.
- The surgical information may include surgical tool information, surgical site information, surgical operation information, and the like.
- Such cue sheet data can be used to reproduce the same surgical operations as the actual surgical procedure, and can later be used to evaluate the surgery or serve as a learning model.
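- A rough sketch of what such cue sheet data could look like is given below: simulation events ordered by time, each carrying the surgical tool, surgical site, and surgical operation. The field names and example entries are assumptions for illustration, not the publication's schema.

```python
# Hypothetical cue sheet data structure built from recorded simulation events.
from dataclasses import dataclass

@dataclass
class CueSheetEntry:
    timestamp: float        # seconds from the start of the (simulated) surgery
    surgical_tool: str
    surgical_site: str
    surgical_operation: str

def build_cue_sheet(simulation_events):
    """Arrange recorded simulation events in time order to form the cue sheet."""
    return sorted(simulation_events, key=lambda e: e.timestamp)

cue_sheet = build_cue_sheet([
    CueSheetEntry(12.0, "grasper", "stomach", "grabbing"),
    CueSheetEntry(3.5, "clip applier", "left hepatic artery", "clipping"),
])
```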
- FIG. 6 is a view schematically showing the configuration of an apparatus 200 for performing a method for providing surgical information using a surgical image according to an embodiment of the present invention.
- The processor 210 may include one or more cores (not shown) and a graphics processor (not shown), and/or a connection path (e.g., a bus) for transmitting and receiving signals with other components.
- the processor 210 executes one or more instructions stored in the memory 220 to perform a method of providing surgery information using the surgery image described with reference to FIGS. 2 to 5.
- In one embodiment, by executing the one or more instructions stored in the memory 220, the processor 210 may acquire a surgical image for a specific surgery, recognize a surgical step in the specific surgery corresponding to the surgical image, estimate an organ candidate group including at least one organ that can be extracted in the surgical step, and specify an organ region in the surgical image based on the positional relationship between organs in the organ candidate group.
- The processor 210 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed in the processor 210.
- the processor 210 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
- the memory 220 may store programs (one or more instructions) for processing and controlling the processor 210. Programs stored in the memory 220 may be divided into a plurality of modules according to their functions.
- The method for providing surgical information using a surgical image according to the embodiments of the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
- The above-described program may include code written in a computer language such as C, C++, Java, or machine language that the computer's processor (CPU) can read through the computer's device interface, so that the computer reads the program and executes the methods implemented in it. Such code may include functional code defining the functions necessary to execute the methods, and control code related to the execution procedures required for the computer's processor to execute those functions in a predetermined order.
- The code may further include memory reference code indicating at which location (address) of the computer's internal or external memory the additional information or media required for the computer's processor to execute the functions should be referenced.
- In addition, when the computer's processor needs to communicate with another remote computer or server to execute the functions, the code may further include communication-related code specifying how to communicate with the remote computer or server using the computer's communication module and what information or media should be transmitted and received during the communication.
- The storage medium is not a medium that stores data for a short time, such as a register, a cache, or a memory, but a medium that stores data semi-permanently and can be read by a device.
- Specifically, examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. That is, the program may be stored in various recording media on servers that the computer can access or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that the computer-readable code is stored in a distributed fashion.
- For example, it may reside in RAM, ROM, EPROM (erasable programmable ROM), EEPROM (electrically erasable programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art.
Claims (10)
- 1. A method for providing surgical information using a surgical image, performed by a computer, the method comprising: obtaining a surgical image for a specific surgery; recognizing a surgical step in the specific surgery corresponding to the surgical image; estimating an organ candidate group including at least one organ that can be extracted in the surgical step; and specifying an organ region in the surgical image based on a positional relationship between organs in the organ candidate group.
- 2. The method of claim 1, wherein the recognizing of the surgical step comprises: performing learning based on at least one surgical image of a previous time point and the surgical image of a current time point; deriving context information on the surgical image through the learning; and recognizing, based on the context information on the surgical image, the surgical step corresponding to the surgical image among the surgical steps defined for the specific surgery.
- 3. The method of claim 2, wherein, in the recognizing of the surgical step, when the specific surgery has a hierarchical structure from a lowest hierarchy to a highest hierarchy, any one surgical operation belonging to a specific hierarchy in the hierarchical structure is recognized as the surgical step corresponding to the surgical image.
- 4. The method of claim 1, wherein the specifying of the organ region in the surgical image comprises: learning the positional relationship between organs in the organ candidate group to calculate position information of an organ included in the surgical image; and specifying, based on the position information of the organ, an organ region in which the organ exists in the surgical image.
- 5. The method of claim 4, wherein the specifying of the organ region in the surgical image further comprises: learning texture information on the organs in the organ candidate group to calculate texture information of the organ included in the surgical image; and detecting, based on the position information of the organ, an organ region corresponding to the texture information of the organ in the surgical image.
- 6. The method of claim 1, further comprising displaying the specified organ region on the surgical image and providing it to a user.
- 7. The method of claim 1, further comprising: matching the specified organ region on a virtual body model; and performing a simulation based on the matched organ region on the virtual body model, wherein the virtual body model is 3D modeling data generated based on medical image data obtained by imaging the inside of the body of the specific surgical subject.
- 8. The method of claim 7, wherein the simulating comprises performing the simulation by applying, based on the matched organ region on the virtual body model, the surgical tools and surgical operations used in the specific surgery, and further comprising obtaining simulation data through the simulation and generating cue sheet data for the specific surgery.
- 9. An apparatus comprising: a memory storing one or more instructions; and a processor executing the one or more instructions stored in the memory, wherein the processor, by executing the one or more instructions, performs: obtaining a surgical image for a specific surgery; recognizing a surgical step in the specific surgery corresponding to the surgical image; estimating an organ candidate group including at least one organ that can be extracted in the surgical step; and specifying an organ region in the surgical image based on a positional relationship between organs in the organ candidate group.
- 10. A computer program, combined with a computer, which is hardware, and stored in a computer-readable recording medium so as to perform the method of claim 1.
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20180019867 | 2018-02-20 | ||
KR20180019866 | 2018-02-20 | ||
KR20180019868 | 2018-02-20 | ||
KR10-2018-0019866 | 2018-02-20 | ||
KR10-2018-0019867 | 2018-02-20 | ||
KR10-2018-0019868 | 2018-02-20 | ||
KR10-2018-0145177 | 2018-11-22 | ||
KR1020180145177A KR20190100011A (en) | 2018-02-20 | 2018-11-22 | Method and apparatus for providing surgical information using surgical video |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019164278A1 true WO2019164278A1 (en) | 2019-08-29 |
Family
ID=67687855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/002096 WO2019164278A1 (en) | 2018-02-20 | 2019-02-20 | Method and device for providing surgical information using surgical image |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019164278A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100106834A (en) * | 2009-03-24 | 2010-10-04 | 주식회사 이턴 | Surgical robot system using augmented reality and control method thereof |
KR20120126679A (en) * | 2011-05-12 | 2012-11-21 | 주식회사 이턴 | Control method of surgical robot system, recording medium thereof, and surgical robot system |
KR101302595B1 (en) * | 2012-07-03 | 2013-08-30 | 한국과학기술연구원 | System and method for predict to surgery progress step |
KR20160086629A (en) * | 2015-01-12 | 2016-07-20 | 한국전자통신연구원 | Method and Apparatus for Coordinating Position of Surgery Region and Surgical Tool During Image Guided Surgery |
US20170039709A1 (en) * | 2012-03-08 | 2017-02-09 | Olympus Corporation | Image processing device, information storage device, and image processing method |
KR20180006622A (en) * | 2015-06-09 | 2018-01-18 | 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 | Search for video content in a medical context |
- 2019-02-20: WO PCT/KR2019/002096 patent/WO2019164278A1/en active Application Filing
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR20190100011A (en) | Method and apparatus for providing surgical information using surgical video | |
WO2019132168A1 (en) | System for learning surgical image data | |
WO2019132169A1 (en) | Method, apparatus, and program for surgical image playback control | |
KR102146672B1 (en) | Program and method for providing feedback about result of surgery | |
WO2019132244A1 (en) | Method for generating surgical simulation information and program | |
WO2021006472A1 (en) | Multiple bone density displaying method for establishing implant procedure plan, and image processing device therefor | |
WO2021206517A1 (en) | Intraoperative vascular navigation method and system | |
WO2021162355A1 (en) | Method and apparatus for providing guide data for intravascular medical tool insertion device | |
WO2020032562A2 (en) | Bioimage diagnosis system, bioimage diagnosis method, and terminal for executing same | |
WO2019164277A1 (en) | Method and device for evaluating bleeding by using surgical image | |
WO2021206518A1 (en) | Method and system for analyzing surgical procedure after surgery | |
WO2019164273A1 (en) | Method and device for predicting surgery time on basis of surgery image | |
WO2021054700A1 (en) | Method for providing tooth lesion information, and device using same | |
WO2018221816A1 (en) | Method for determining whether examinee is infected by microorganism and apparatus using the same | |
WO2020159276A1 (en) | Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image | |
WO2019164278A1 (en) | Method and device for providing surgical information using surgical image | |
WO2020145455A1 (en) | Augmented reality-based virtual training simulator system for cardiovascular system surgical procedure, and method therefor | |
WO2023136695A1 (en) | Apparatus and method for generating virtual lung model of patient | |
WO2022119347A1 (en) | Method, apparatus, and recording medium for analyzing coronary plaque tissue through ultrasound image-based deep learning | |
Hosp et al. | States of confusion: Eye and head tracking reveal surgeons’ confusion during arthroscopic surgery | |
WO2022108387A1 (en) | Method and device for generating clinical record data | |
WO2022181919A1 (en) | Device and method for providing virtual reality-based operation environment | |
WO2021149918A1 (en) | Bone age estimation method and apparatus | |
WO2021015490A2 (en) | Method and device for analyzing specific area of image | |
WO2021162181A1 (en) | Method and apparatus for training machine learning model for determining operation of medical tool control device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19757641; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19757641; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1202A DATED 15.02.2021) |