WO2019164273A1 - Method and device for predicting surgery time on basis of surgery image
- Publication number: WO2019164273A1 (PCT/KR2019/002091)
- Authority: WIPO (PCT)
Classifications
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/30—Surgical robots
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- G06N20/00—Machine learning
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T3/00—Geometric image transformations in the plane of the image
Definitions
- The present invention relates to a method and apparatus for predicting a surgery time based on a surgical image.
- Open surgery refers to surgery in which the medical staff directly sees and touches the part to be treated.
- Minimally invasive surgery is also known as keyhole surgery; laparoscopic surgery and robotic surgery are typical examples.
- In laparoscopic surgery, small holes are made in the necessary parts without opening the abdomen, a laparoscope fitted with a special camera and surgical tools are inserted into the body, the procedure is observed through a video monitor, and microsurgery is performed using a laser or special instruments.
- Robotic surgery performs minimally invasive surgery using a surgical robot.
- Radiosurgery refers to surgical treatment performed with radiation or laser light from outside the body.
- One object of the present invention is to provide a method and apparatus for predicting the surgery time based on a surgical image.
- Another object is to provide a method and apparatus for predicting the surgery time of each surgical step by performing learning based on surgical images.
- Another object is to provide a method and apparatus for predicting the remaining surgery time required to complete the current surgical step, using surgical images obtained in real time.
- Another object is to provide a method and apparatus for generating a variety of learning data, based on surgical images obtained from actual patient surgery, to be used for predicting the surgery time.
- According to an embodiment of the present invention, a computer-implemented method of predicting a surgery time based on a surgical image includes: obtaining a preset surgical image including a surgical operation for a specific surgical step; generating learning data using the preset surgical image and a surgery time obtained based on the preset surgical image; and performing learning based on the learning data to predict the surgery time of the specific surgical step.
- In one embodiment, generating the learning data may include: obtaining a first surgery time based on the preset surgical image; generating a censored surgical image by removing an arbitrary section from the preset surgical image, and obtaining a second surgery time based on the censored surgical image; and generating at least one of the preset surgical image and the censored surgical image as the learning data, based on the first surgery time and the second surgery time.
- In one embodiment, when the preset surgical image is generated as the learning data, the learning may be performed based on each image frame in the preset surgical image and the first surgery time.
- In one embodiment, the method further includes obtaining an actual surgical image as the specific surgical step is performed during an actual surgical procedure. In this case, predicting the surgery time may include: obtaining, based on the image frame of the current time point in the actual surgical image, the surgery time elapsed up to the current time point in performing the specific surgical step; and predicting, based on the average surgery time of the specific surgical step obtained through the learning and the surgery time elapsed up to the current time point, the remaining surgery time required to complete the specific surgical step after the current time point.
- In one embodiment, obtaining the preset surgical image may include obtaining, from each of a plurality of patients, a preset surgical image in which the specific surgical step was performed.
- In one embodiment, the specific surgical step may be any one surgical step belonging to a specific layer in a surgical procedure organized into a hierarchical structure according to surgical operations.
- An apparatus according to an embodiment of the present invention includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory. By executing the one or more instructions, the processor obtains a preset surgical image including a surgical operation for a specific surgical step, generates learning data using the preset surgical image and a surgery time obtained based on the preset surgical image, and performs learning based on the learning data to predict the surgery time of the specific surgical step.
- A computer program according to an embodiment of the present invention is combined with a computer, which is hardware, and stored in a computer-readable recording medium so as to perform the method of predicting a surgery time based on a surgical image.
- According to the present invention, the surgery time and the remaining surgery time of each surgical step can be predicted based on surgical images. Providing the predicted surgery time and remaining surgery time to the medical staff allows them to grasp more accurately how the operation is currently progressing, and to carry out the remaining surgical procedure efficiently.
- According to the present invention, the surgery time and the remaining surgery time can be provided in real time, from which an accurate anesthetic dosage can be calculated. It is also effective for determining an appropriate additional anesthesia dose according to the remaining surgery time.
- According to the present invention, a variety of learning data can be generated based on surgical images obtained from actual patient surgery, through which learning for predicting the surgery time can be performed more effectively. In addition, by performing learning using such varied learning data, an accurate predicted surgery time can be derived for each surgical step.
- FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- FIG. 2 is a flowchart schematically illustrating a method for predicting a surgery time based on a surgery image according to an exemplary embodiment of the present invention.
- FIG. 3 is a view for explaining a process of generating learning data based on a surgical image according to an embodiment of the present invention.
- FIG. 4 is a diagram illustrating a process of performing learning based on learning data according to an embodiment of the present invention.
- FIG. 5 is a view schematically showing the configuration of an apparatus 200 for performing a method for predicting a surgery time based on a surgery image according to an embodiment of the present invention.
- As used herein, a "part" or "module" refers to a software component or a hardware component such as an FPGA or ASIC, and a "part" or "module" performs certain roles. However, "part" or "module" is not limited to software or hardware.
- A "part" or "module" may be configured to reside in an addressable storage medium or to operate one or more processors.
- Thus, as an example, a "part" or "module" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functions provided within the components and "parts" or "modules" may be combined into a smaller number of components and "parts" or "modules", or further separated into additional components and "parts" or "modules".
- As used herein, a "computer" includes all of the various devices capable of performing arithmetic processing and providing a result to a user.
- For example, a computer may be a desktop PC or a notebook computer, as well as a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC, or a personal digital assistant (PDA).
- In addition, when a head mounted display (HMD) device includes a computing function, the HMD device may be a computer.
- A computer may also be a server that receives a request from a client and performs information processing.
- FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- Referring to FIG. 1, the robotic surgery system includes a medical imaging apparatus 10, a server 100, and a control unit 30, a display 32, and a surgical robot 34 provided in an operating room. In one embodiment, the medical imaging apparatus 10 may be omitted from the robotic surgery system according to the disclosed embodiment.
- The surgical robot 34 includes an imaging device 36 and a surgical tool 38.
- Robotic surgery is performed by the user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robotic surgery may also be performed automatically by the control unit 30 without the user's control.
- The server 100 is a computing device including at least one processor and a communication unit.
- The control unit 30 includes a computing device having at least one processor and a communication unit. In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- The imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera and is used to photograph the object, that is, the surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled to a surgical arm of the surgical robot 34.
- In one embodiment, the image photographed by the imaging device 36 is displayed on the display 32.
- The surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, fixing, grabbing, and similar operations on the surgical site. The surgical tool 38 is used in combination with a surgical arm of the surgical robot 34.
- The control unit 30 receives information necessary for the surgery from the server 100, or generates such information and provides it to the user. For example, the control unit 30 displays the generated or received information on the display 32.
- The user performs the robotic surgery by manipulating the control unit 30 while looking at the display 32, thereby controlling the movement of the surgical robot 34.
- The server 100 generates information necessary for the robotic surgery using medical image data of the object previously photographed by the medical imaging apparatus 10, and provides the generated information to the control unit 30.
- The control unit 30 displays the information received from the server 100 on the display 32 to provide it to the user, or controls the surgical robot 34 using the received information.
- The means usable as the medical imaging apparatus 10 is not limited; for example, various other medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
- The present invention performs learning using surgical images or surgical data obtainable during a surgical procedure as learning material, and provides a method by which the results of such learning can be utilized in the surgical procedures of other patients.
- Hereinafter, the method of predicting a surgery time based on a surgical image according to the embodiments disclosed herein is described as being performed by a "computer". The computer may be the server 100 or the control unit 30 of FIG. 1, but is not limited thereto and is used to encompass any device capable of performing computing processing. For example, the computer may be a computing device provided separately from the devices shown in FIG. 1.
- The embodiments disclosed below are not applicable only in connection with the robotic surgery system illustrated in FIG. 1, but may be applied to all kinds of embodiments in which a surgical image can be acquired and utilized during a surgical procedure. For example, in addition to robotic surgery, they can be applied in connection with minimally invasive surgery such as laparoscopic or endoscopic surgery.
- FIG. 2 is a flowchart schematically illustrating a method for predicting a surgery time based on a surgery image according to an exemplary embodiment of the present invention.
- Referring to FIG. 2, the method of predicting a surgery time based on a surgical image may include: obtaining a preset surgical image including a surgical operation for a specific surgical step (S100); generating learning data using the preset surgical image and a surgery time obtained based on the preset surgical image (S200); and performing learning based on the learning data to predict the surgery time of the specific surgical step (S300). Hereinafter, each step is described in detail.
- First, the computer may obtain a preset surgical image including a surgical operation for a specific surgical step (S100).
- When surgery is performed on a patient, the medical staff may perform the actual surgery directly, or may perform minimally invasive surgery using a laparoscope or an endoscope, including the surgical robot described with reference to FIG. 1.
- In this case, the computer may obtain a surgical image capturing a scene that includes the surgical operations performed during the surgical procedure, the surgical tools involved, and the surgical site. For example, the computer may obtain, from a camera inserted into the patient's body, a surgical image capturing the surgical site currently being operated on together with the surgical tools.
- Each type of surgery (e.g., gastric cancer surgery, colorectal cancer surgery, etc.) may be performed through a different surgical procedure, but surgeries of the same type proceed through the same or a similar series of surgical steps. That is, when a specific surgery (for example, the same type of surgery, such as gastric cancer surgery) is performed on a plurality of patients, the same or similar surgical operations are performed on each patient, following a specific surgical procedure.
- In other words, a specific surgery may consist of surgical procedures (i.e., surgical steps) predefined according to classification criteria.
- For example, the computer may classify the surgical steps according to the time course of the specific surgery, or classify them so that each surgical step corresponds to a surgical site.
- As another example, the computer may classify the surgical steps based on the position or movement range of the camera during the specific surgery, or based on changes (e.g., replacement) of the surgical tools.
- The computer may classify the surgical steps according to such specific classification criteria and then predefine the surgical operations performed in each classified surgical step.
- In addition, the surgical steps constituting a specific surgery may be organized into a hierarchical structure. That is, a specific surgical procedure may be classified step by step, from the lowest layer (level) to the highest layer (level), to form a hierarchy.
- The lowest layer consists of the smallest units representing the surgical procedure, and may include minimal surgical operations, each meaningful as one minimum action. For example, the computer may recognize a surgical operation exhibiting one constant motion pattern, such as cutting, grabbing, or moving, as a minimal surgical operation, and configure it as the lowest layer (i.e., the lowest surgical step).
- The computer may configure a higher layer by grouping at least one minimal surgical operation according to a specific classification criterion (e.g., time course, surgical site, camera movement, change of surgical tool, etc.). For example, when minimal surgical operations representing grabbing, moving, and cutting with a surgical tool are each performed in a predetermined order, and the connected sequence has meaning as one specific operation, the sequence may be grouped into one higher layer. As another example, when minimal surgical operations such as clipping, moving, and cutting are performed in succession, it may be recognized that this is a surgical operation for cutting a blood vessel, and they may be grouped into one higher layer. As yet another example, when minimal surgical operations such as grabbing, lifting, and cutting are performed in succession, it may be recognized that this is a surgical operation for removing fat, and they may likewise be grouped into one higher layer.
- In this way, higher surgical operations (i.e., higher surgical steps) are formed by grouping lower surgical operations (i.e., lower surgical steps), and a hierarchical structure can finally be formed up to the highest layer (i.e., the highest surgical step) of the specific surgical procedure.
- Accordingly, a specific surgery may be represented as a hierarchical structure, such as a tree, of the surgical steps constituting it.
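For illustration only, such a tree of surgical steps could be modeled as follows; the class, field, and step names in this Python sketch are hypothetical and are not taken from the present disclosure.

```python
# Hypothetical sketch of a hierarchical surgical procedure as a tree.
# Class, field, and step names are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SurgicalStep:
    name: str
    children: List["SurgicalStep"] = field(default_factory=list)

    def is_minimal(self) -> bool:
        # Lowest-layer steps (minimal surgical operations) have no children.
        return not self.children

# Minimal operations (clipping, moving, cutting) grouped into a higher step,
# which in turn belongs to the whole surgery (the highest layer).
divide_vessel = SurgicalStep("divide blood vessel", [
    SurgicalStep("clipping"),
    SurgicalStep("moving"),
    SurgicalStep("cutting"),
])
surgery = SurgicalStep("specific surgery", [divide_vessel])
```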
- The surgical image obtained in step S100 is obtained by performing a specific surgery on a patient, and may include image frames capturing any one of the surgical steps belonging to a specific layer in such a hierarchically structured surgical procedure. Here, the specific layer may mean an intermediate layer other than the lowest and highest layers of the hierarchy.
- In other words, the surgical image obtained in step S100 may consist of image frames containing a predefined series of surgical operations for a specific surgical step of a specific surgery. In this case, the computer may obtain, from each of a plurality of patients, a surgical image corresponding to the specific surgical step predefined in the specific surgical procedure. Each of the surgical images obtained from the plurality of patients may be set to contain the image frames of the predetermined surgical operations performed in the specific surgical step. For example, such an image may be generated in advance by having the medical staff or the computer extract only the image frames corresponding to the specific surgical step from the full surgical image. Hereinafter, a surgical image set to contain the image frames of the predetermined surgical operations performed in a specific surgical step is referred to as a preset surgical image.
- Next, the computer may generate learning data using the preset surgical image obtained in step S100 and a surgery time obtained based on the preset surgical image (S200).
- In one embodiment, the computer may obtain a first surgery time based on the preset surgical image. The computer may also generate a censored surgical image in which an arbitrary section has been removed from the preset surgical image, and obtain a second surgery time based on the censored surgical image. The computer may then generate, as learning data, at least one of the preset surgical image and the censored surgical image, based on the first surgery time and the second surgery time. The detailed process is described with reference to FIG. 3.
- As described above, the present invention provides a method of obtaining surgical images containing the surgical operations performed in a specific surgical step and constructing learning data from them, so that the learning can predict more accurately the surgery time required in each surgical step. The present invention also provides a method of predicting the remaining surgery time based on the image of the current time point obtained during actual surgery.
- FIG. 3 is a view for explaining a process of generating learning data based on a surgical image according to an embodiment of the present invention.
- Referring to FIG. 3, the computer may obtain preset surgical images from each of a plurality of patients (a first patient to an n-th patient).
- As described above, even when the same surgical step is performed, the image frames included in each preset surgical image obtained from the plurality of patients may differ according to the patient's condition or the operating style of the medical staff performing the surgery.
- Accordingly, the present invention constructs learning data based on the preset surgical images obtained from the patients, and learns from these data to accurately predict the surgery time of the corresponding surgical image (i.e., the corresponding surgical step).
- In one embodiment, the computer may generate the learning data by applying a survival analysis technique. That is, the computer may apply survival analysis to analyze the surgery time based on the surgical images, and generate the learning data accordingly.
- First, the computer may generate, from each preset surgical image, a censored surgical image in which an arbitrary section has been removed (S210). The censored surgical image may be an image from which at least one image frame corresponding to an arbitrary section has been removed from the preset surgical image.
- For example, a preset surgical image may contain image frames of another surgical step (e.g., before or after the step), or may contain additional surgical movements that are not essential to the surgical step. Since a preset surgical image may thus not consist only of the essential surgical operations of the corresponding surgical step, the computer generates a censored surgical image with an arbitrary section removed.
- In this case, the arbitrary section may be determined through a random function, a section including the front or rear image frames of the surgical image may be determined as the arbitrary section, or a particular corresponding section may otherwise be designated as the arbitrary section.
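As a rough sketch of steps S210 and S220, the surgical image can be treated as a list of frames at an assumed fixed frame rate; the function names and the frame rate below are illustrative assumptions, not values from the present disclosure.

```python
# Hypothetical sketch: build a censored image by removing a random section
# (S210) and take each surgery time as the playback time of the image (S220).
import random

FPS = 30  # assumed frame rate of the surgical video

def censor(frames: list) -> list:
    """Remove an arbitrary contiguous section of frames."""
    start = random.randrange(len(frames))
    end = random.randrange(start + 1, len(frames) + 1)
    return frames[:start] + frames[end:]

def duration_sec(frames: list) -> float:
    """Playback time of the image, used here as the surgery time."""
    return len(frames) / FPS

preset_image = list(range(9000))            # stand-in for a 5-minute preset image
censored_image = censor(preset_image)       # censored surgical image
first_time = duration_sec(preset_image)     # first surgery time
second_time = duration_sec(censored_image)  # second surgery time
```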
- As shown in FIG. 3, the computer may thus generate the first to n-th preset surgical images obtained from the first to n-th patients, respectively, together with the first to n-th censored surgical images derived from them. In FIG. 3, the censored surgical images are represented by the image frames enclosed in the dotted-line boxes.
- Next, the computer obtains the surgery time required to perform the specific surgical step based on each preset surgical image (hereinafter, the first surgery time), and the surgery time required to perform the specific surgical step based on each censored surgical image (hereinafter, the second surgery time) (S220). That is, the computer may obtain and compare the first surgery times (e.g., Actual Duration in FIG. 3) of the first to n-th preset surgical images and the second surgery times (e.g., Censored Duration in FIG. 3) of the first to n-th censored surgical images. Here, each surgery time may be the time taken when all the image frames in the corresponding surgical image are performed; for example, it may be the playback time of the surgical image.
- Next, the computer may generate learning data using at least one of the preset surgical image and the censored surgical image, based on the first surgery time and the second surgery time (S230).
- For example, the computer may compare the first surgery time of the first preset surgical image (e.g., Actual Duration in FIG. 3) with the second surgery time of the first censored surgical image (e.g., Censored Duration in FIG. 3), and determine which of the two images to use as learning data for the first patient. The computer may then generate the learning data by mapping the surgical image determined as the learning data for the first patient (e.g., the first preset surgical image) to its surgery time.
- In determining which surgical image to use as learning data by comparing the first surgery time with the second surgery time, the computer may use the lengths of the surgery times, use a predetermined criterion, or select arbitrarily. For example, the computer may select the surgical image with the shorter of the first and second surgery times. Alternatively, the difference between the first and second surgery times may be used; for example, when the difference is equal to or greater than a predetermined reference value, the computer may select the surgical image having the first surgery time as the learning data.
- Likewise, based on the result of comparing the first surgery times of the first to n-th preset surgical images with the second surgery times of the first to n-th censored surgical images, the computer may determine either the preset surgical image or the censored surgical image for each patient, and construct the learning data for the specific surgical step by mapping each selected image to its surgery time. For example, when a preset surgical image is selected, the computer may set its surgical status to "event" and record the surgery time of the selected image as learning data. Alternatively, when a censored surgical image is selected, the computer may set its surgical status to "censored" and record the surgery time of the selected image as learning data. That is, the computer may generate a learning data set using the preset or censored surgical images obtained from the plurality of patients on whom the specific surgical step was performed.
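For illustration only, this labeling step (S230) might be sketched as follows; the selection rule, the threshold value, and the variable names are assumptions, not values given in the present disclosure.

```python
# Hypothetical sketch of S230: select either the preset or the censored image
# per patient and record it with its duration and survival-analysis status.
DIFF_THRESHOLD = 60.0  # assumed reference value, in seconds

def make_example(preset, censored, first_time, second_time):
    if first_time - second_time >= DIFF_THRESHOLD:
        # Large gap: keep the full preset image with status "event".
        return {"frames": preset, "duration": first_time, "status": "event"}
    # Otherwise keep the censored image with status "censored".
    return {"frames": censored, "duration": second_time, "status": "censored"}

# One (preset, censored, first_time, second_time) tuple per patient,
# assumed to be filled from steps S210 and S220.
per_patient_images = []
dataset = [make_example(*item) for item in per_patient_images]
```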
- Finally, the computer may predict the surgery time of the specific surgical step by performing learning based on the learning data generated in step S200 (S300).
- In one embodiment, the computer may perform the learning using each image frame in the learning data and the surgery time required to perform the specific surgical step based on those image frames, and can thereby learn the average surgery time of the specific surgical step. For example, when the preset surgical image was generated as the learning data, the computer may perform the learning based on each image frame in the preset surgical image and its first surgery time. Alternatively, when the censored surgical image was generated as the learning data, the computer may perform the learning based on each image frame in the censored surgical image and its second surgery time. The detailed process is described with reference to FIG. 4.
- FIG. 4 is a diagram illustrating a process of performing learning based on learning data according to an embodiment of the present invention.
- Referring to FIG. 4, the computer may acquire the learning data and perform the learning using deep learning. Here, the learning data may have been generated through steps S210 to S230 described above, based on a preset surgical image or a censored surgical image.
- First, the computer may receive the learning data and extract each image frame L1 to LN from it (S310). For example, the computer may extract each image frame included in the preset surgical image from the input learning data, or each image frame included in the censored surgical image.
- Next, the computer may perform learning on the extracted image frames using a convolutional neural network (CNN), and as a result extract feature information for each image frame (S320). For example, the computer may learn the characteristics of the surgical image by feeding each image frame into at least one layer (e.g., a convolution layer). Through this learning, the computer can infer what each image frame represents or means.
- Next, the computer may perform learning using a recurrent neural network (RNN) on the per-frame results derived from the CNN (S330). For example, the computer may receive the learning data in units of frames and perform the learning using an RNN (e.g., an LSTM). In this case, the feature information of each image frame may be the input, and the learning may use the surgical status information (e.g., event/censored) of the corresponding learning data and its surgery time. Since an RNN performs learning by connecting at least one image frame of a previous time point with the image frame of the current time point, the computer learns the relationships between the image frames based on their feature information, and can thereby determine whether the frames contain surgical operations predefined for the specific surgical step or contain surgical operations of another surgical step.
- Through the learning process using the CNN and the RNN as described above, the computer may predict the average surgery time of the specific surgical step (S340).
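For illustration, the CNN-plus-RNN pipeline of steps S310 to S340 could be sketched in PyTorch roughly as follows; all layer sizes and the simple regression head are illustrative choices, not an implementation prescribed by the present disclosure.

```python
# Hypothetical sketch: per-frame CNN features (S320), an LSTM over the frame
# sequence (S330), and a regression head for the surgery duration (S340).
import torch
import torch.nn as nn

class DurationModel(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted duration, e.g. in seconds

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.rnn(feats)          # connect past frames to the present
        return self.head(out[:, -1])      # predict from the last time step

model = DurationModel()
pred = model(torch.randn(2, 8, 3, 64, 64))  # toy batch: 2 clips of 8 frames
```

In practice, the event/censored status could be incorporated through a survival-analysis loss rather than plain regression; the sketch omits this for brevity.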
- The computer obtains surgical images from each of the plurality of patients on whom the specific surgical step was performed, generates learning data from them, and can repeatedly apply the learning described with reference to FIG. 4. By repeatedly learning from a large amount of learning data, the computer can predict the average surgery time of the specific surgical step.
- By performing steps S100 to S300 described above for each surgical step, the computer can finally predict the surgery time required to perform every surgical step in the surgical procedure, and can accordingly build a learning model that predicts the surgery time of each surgical step.
- One embodiment of the present invention applies the surgery times predicted for each surgical step, as described above, to the surgical procedures of other patients.
- First, the computer may obtain, in real time, an actual surgical image of a patient undergoing surgery. The computer may extract the image frame of the current time point from the obtained actual surgical image, and recognize the surgical step currently being performed based on that frame. The computer may then calculate, based on the image frame of the current time point, the surgery time elapsed from the start of the current surgical step up to the current time point.
- Meanwhile, through the learning for predicting the surgery time of each surgical step described above, the computer knows the average surgery time of the current surgical step. Therefore, using the average surgery time of the current surgical step predicted through the learning and the elapsed time up to the current time point calculated from the current image frame, the computer can predict the remaining surgery time required to perform the remaining surgical operations of the current surgical step.
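The remaining-time estimate described above reduces to a simple subtraction; a minimal sketch, assuming the elapsed time of the current step has already been measured from the image frame of the current time point.

```python
# Hypothetical sketch: remaining time = learned average for the current step
# minus the time elapsed so far, clamped at zero since a step may overrun.
def remaining_time(avg_step_duration: float, elapsed: float) -> float:
    return max(avg_step_duration - elapsed, 0.0)

# e.g. a step learned to take 1800 s on average, 1200 s elapsed -> 600 s left
remaining = remaining_time(avg_step_duration=1800.0, elapsed=1200.0)
```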
- According to the present invention, it is possible not only to predict the surgery time required for each surgical step, but also to accurately predict the time remaining after the current time point. Providing the accurate surgery time and remaining surgery time to the medical staff lets them grasp more precisely how the operation is currently progressing, so the remaining surgical procedure can be carried out efficiently. In particular, the accurate surgery time and remaining time must be known in order to proceed properly with the patient's operation.
- For example, since the anesthesia time during an operation is important, the present invention, by providing the surgery time and remaining surgery time in real time, allows an accurate anesthetic dosage to be calculated. It is also effective for determining an appropriate additional anesthesia dose according to the remaining surgery time.
- FIG. 5 is a view schematically showing the configuration of an apparatus 200 for performing a method for predicting a surgery time based on a surgery image according to an embodiment of the present invention.
- Referring to FIG. 5, the processor 210 may include one or more cores (not shown), a graphics processor (not shown), and/or a connection passage (e.g., a bus) through which signals are transmitted to and received from other components.
- The processor 210 executes the one or more instructions stored in the memory 220 to perform the method of predicting a surgery time based on a surgical image described with reference to FIGS. 2 to 4.
- For example, by executing the one or more instructions stored in the memory 220, the processor 210 may obtain a preset surgical image including a surgical operation for a specific surgical step, generate learning data using the preset surgical image and a surgery time obtained based on the preset surgical image, and perform learning based on the learning data to predict the surgery time of the specific surgical step.
- Meanwhile, the processor 210 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) that temporarily and/or permanently store signals (or data) processed inside the processor 210.
- In addition, the processor 210 may be implemented in the form of a system on chip (SoC) including at least one of a graphics processor, RAM, and ROM.
- The memory 220 may store programs (one or more instructions) for the processing and control performed by the processor 210. The programs stored in the memory 220 may be divided into a plurality of modules according to their functions.
- The method of predicting a surgery time based on a surgical image according to the embodiments of the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
- For the computer to read the program and execute the methods implemented as the program, the program may include code written in a computer language, such as C, C++, JAVA, or machine language, that the computer's processor (CPU) can read through the computer's device interface. Such code may include functional code related to the functions that define what is necessary to execute the methods, and control code related to the execution procedures needed for the computer's processor to execute those functions in a predetermined order. The code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media required for the processor to execute the functions should be referenced. In addition, when the processor needs to communicate with any other remote computer or server to execute the functions, the code may further include communication-related code specifying how to communicate with the other computer or server using the computer's communication module, and what information or media should be transmitted and received during communication.
- The stored medium means not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and that can be read by a device. Specific examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. That is, the program may be stored in various recording media on various servers accessible to the computer, or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that computer-readable code is stored in a distributed fashion.
- That is, the program may reside in RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable ROM), EEPROM (electrically erasable programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art.
Abstract
Provided is a method for predicting a surgery time on the basis of a surgery image. The method comprises the steps of: acquiring a preconfigured surgery image including a surgical operation for a specific surgery step; generating learning data by using the preconfigured surgery image and a surgery time obtained on the basis of the preconfigured surgery image; and performing learning on the basis of the learning data so as to predict the surgery time in the specific surgery step.
Description
The present invention relates to a method and apparatus for predicting a surgery time based on a surgical image.
Medical surgery can be classified into open surgery, minimally invasive surgery (MIS) including laparoscopic surgery and robotic surgery, and radiosurgery. Open surgery refers to surgery in which the medical staff directly sees and touches the part to be treated. Minimally invasive surgery, also called keyhole surgery, is typified by laparoscopic surgery and robotic surgery. In laparoscopic surgery, small holes are made in the necessary parts without opening the abdomen, a laparoscope fitted with a special camera and surgical tools are inserted into the body, the procedure is observed through a video monitor, and microsurgery is performed using a laser or special instruments. Robotic surgery performs minimally invasive surgery using a surgical robot. Radiosurgery refers to surgical treatment performed with radiation or laser light from outside the body.
In such medical surgery, a surgical image is often acquired during the actual operation and the surgery is performed based on it. It is therefore important to provide a variety of information through the surgical images obtained during actual surgery. In particular, as the surgical procedure progresses, the surgery time needs to be predicted and reported. Accordingly, there is a need for a method that can predict the surgery time using the surgical images acquired during actual surgery.
One object of the present invention is to provide a method and apparatus for predicting the surgery time based on a surgical image.
Another object is to provide a method and apparatus for predicting the surgery time of each surgical step by performing learning based on surgical images.
Another object is to provide a method and apparatus for predicting the remaining surgery time required to complete the current surgical step, using surgical images obtained in real time.
Another object is to provide a method and apparatus for generating a variety of learning data, based on surgical images obtained from actual patient surgery, to be used for predicting the surgery time.
The problems to be solved by the present invention are not limited to those mentioned above, and other unmentioned problems will be clearly understood by those skilled in the art from the description below.
According to an embodiment of the present invention, a computer-implemented method of predicting a surgery time based on a surgical image includes: obtaining a preset surgical image including a surgical operation for a specific surgical step; generating learning data using the preset surgical image and a surgery time obtained based on the preset surgical image; and performing learning based on the learning data to predict the surgery time of the specific surgical step.
In one embodiment, generating the learning data may include: obtaining a first surgery time based on the preset surgical image; generating a censored surgical image by removing an arbitrary section from the preset surgical image, and obtaining a second surgery time based on the censored surgical image; and generating at least one of the preset surgical image and the censored surgical image as the learning data, based on the first surgery time and the second surgery time.
In one embodiment, predicting the surgery time may include: performing the learning using each image frame in the learning data and the surgery time required to perform the specific surgical step based on those image frames; and predicting, through the learning, the average surgery time of the specific surgical step.
In one embodiment, when the preset surgical image is generated as the learning data, the learning may be performed based on each image frame in the preset surgical image and the first surgery time.
In one embodiment, when the censored surgical image is generated as the learning data, the learning may be performed based on each image frame in the censored surgical image and the second surgery time.
In one embodiment, the method further includes obtaining an actual surgical image as the specific surgical step is performed during an actual surgical procedure. In this case, predicting the surgery time may include: obtaining, based on the image frame of the current time point in the actual surgical image, the surgery time elapsed up to the current time point in performing the specific surgical step; and predicting, based on the average surgery time of the specific surgical step predicted through the learning and the surgery time elapsed up to the current time point, the remaining surgery time required to complete the specific surgical step after the current time point.
In one embodiment, obtaining the preset surgical image may include obtaining, from each of a plurality of patients, a preset surgical image in which the specific surgical step was performed.
In one embodiment, the specific surgical step may be any one surgical step belonging to a specific layer in a surgical procedure organized into a hierarchical structure according to surgical operations.
An apparatus according to an embodiment of the present invention includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory. By executing the one or more instructions, the processor obtains a preset surgical image including a surgical operation for a specific surgical step, generates learning data using the preset surgical image and a surgery time obtained based on the preset surgical image, and performs learning based on the learning data to predict the surgery time of the specific surgical step.
A computer program according to an embodiment of the present invention is combined with a computer, which is hardware, and stored in a computer-readable recording medium so as to perform the method of predicting a surgery time based on a surgical image.
According to the present invention, the surgery time and the remaining surgery time of each surgical step can be predicted based on surgical images. Providing the predicted surgery time and remaining surgery time to the medical staff allows them to grasp more accurately how the operation is currently progressing, and to carry out the remaining surgical procedure efficiently.
According to the present invention, the surgery time and the remaining surgery time can be provided in real time, from which an accurate anesthetic dosage can be calculated. It is also effective for determining an appropriate additional anesthesia dose according to the remaining surgery time.
According to the present invention, a variety of learning data can be generated based on surgical images obtained from actual patient surgery, through which learning for predicting the surgery time can be performed more effectively. In addition, by performing learning using such varied learning data, an accurate predicted surgery time can be derived for each surgical step.
The effects of the present invention are not limited to those mentioned above, and other unmentioned effects will be clearly understood by those skilled in the art from the description below.
FIG. 1 is a schematic diagram of a system capable of performing robotic surgery according to an embodiment of the present invention.
FIG. 2 is a flowchart schematically illustrating a method of predicting a surgery time based on a surgical image according to an embodiment of the present invention.
FIG. 3 is a diagram for explaining a process of generating learning data based on surgical images according to an embodiment of the present invention.
FIG. 4 is a diagram for explaining a process of performing learning based on learning data according to an embodiment of the present invention.
FIG. 5 is a diagram schematically showing the configuration of an apparatus 200 for performing the method of predicting a surgery time based on a surgical image according to an embodiment of the present invention.
Advantages and features of the present invention, and methods of achieving them, will become apparent with reference to the embodiments described below in detail together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be embodied in various different forms; these embodiments are provided only so that the disclosure of the present invention is complete and so that the scope of the invention is fully conveyed to those of ordinary skill in the art to which the present invention belongs, and the present invention is defined only by the scope of the claims.
The terminology used herein is for describing the embodiments and is not intended to limit the present invention. In this specification, the singular also includes the plural unless the context clearly indicates otherwise. As used herein, "comprises" and/or "comprising" does not exclude the presence or addition of one or more components other than those mentioned. Like reference numerals refer to like components throughout the specification, and "and/or" includes each of the mentioned components and all combinations of one or more of them. Although "first", "second", and the like are used to describe various components, these components are of course not limited by such terms; the terms are used only to distinguish one component from another. Accordingly, a first component mentioned below may of course be a second component within the technical spirit of the present invention.
Unless otherwise defined, all terms used in this specification (including technical and scientific terms) may be used with meanings that can be commonly understood by those of ordinary skill in the art to which the present invention belongs. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless clearly and specifically defined.
명세서에서 사용되는 "부" 또는 “모듈”이라는 용어는 소프트웨어, FPGA 또는 ASIC과 같은 하드웨어 구성요소를 의미하며, "부" 또는 “모듈”은 어떤 역할들을 수행한다. 그렇지만 "부" 또는 “모듈”은 소프트웨어 또는 하드웨어에 한정되는 의미는 아니다. "부" 또는 “모듈”은 어드레싱할 수 있는 저장 매체에 있도록 구성될 수도 있고 하나 또는 그 이상의 프로세서들을 재생시키도록 구성될 수도 있다. 따라서, 일 예로서 "부" 또는 “모듈”은 소프트웨어 구성요소들, 객체지향 소프트웨어 구성요소들, 클래스 구성요소들 및 태스크 구성요소들과 같은 구성요소들과, 프로세스들, 함수들, 속성들, 프로시저들, 서브루틴들, 프로그램 코드의 세그먼트들, 드라이버들, 펌웨어, 마이크로 코드, 회로, 데이터, 데이터베이스, 데이터구조들, 테이블들, 어레이들 및 변수들을 포함한다. 구성요소들과 "부" 또는 “모듈”들 안에서 제공되는 기능은 더 작은 수의 구성요소들 및 "부" 또는 “모듈”들로 결합되거나 추가적인 구성요소들과 "부" 또는 “모듈”들로 더 분리될 수 있다.As used herein, the term "part" or "module" refers to a hardware component such as software, FPGA, or ASIC, and the "part" or "module" plays certain roles. However, "part" or "module" is not meant to be limited to software or hardware. The “unit” or “module” may be configured to be in an addressable storage medium or may be configured to play one or more processors. Thus, as an example, a "part" or "module" may include components such as software components, object-oriented software components, class components, and task components, processes, functions, properties, Procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Functions provided within components and "parts" or "modules" may be combined into smaller numbers of components and "parts" or "modules" or into additional components and "parts" or "modules". Can be further separated.
본 명세서에서 "컴퓨터"는 연산처리를 수행하여 사용자에게 결과를 제공할 수 있는 다양한 장치들이 모두 포함된다. 예를 들어, 컴퓨터는 데스크 탑 PC, 노트북(Note Book) 뿐만 아니라 스마트폰(Smart phone), 태블릿 PC, 셀룰러폰(Cellular phone), 피씨에스폰(PCS phone; Personal Communication Service phone), 동기식/비동기식 IMT-2000(International Mobile Telecommunication-2000)의 이동 단말기, 팜 PC(Palm Personal Computer), 개인용 디지털 보조기(PDA; Personal Digital Assistant) 등도 해당될 수 있다. 또한, 헤드마운트 디스플레이(Head Mounted Display; HMD) 장치가 컴퓨팅 기능을 포함하는 경우, HMD장치가 컴퓨터가 될 수 있다. 또한, 컴퓨터는 클라이언트로부터 요청을 수신하여 정보처리를 수행하는 서버가 해당될 수 있다.As used herein, the term "computer" includes all the various devices capable of performing arithmetic processing to provide a result to a user. For example, a computer can be a desktop PC, a notebook, as well as a smartphone, a tablet PC, a cellular phone, a PCS phone (Personal Communication Service phone), synchronous / asynchronous The mobile terminal of the International Mobile Telecommunication-2000 (IMT-2000), a Palm Personal Computer (PC), a Personal Digital Assistant (PDA), and the like may also be applicable. In addition, when a head mounted display (HMD) device includes a computing function, the HMD device may be a computer. Also, the computer may correspond to a server that receives a request from a client and performs information processing.
이하, 첨부된 도면을 참조하여 본 발명의 실시예를 상세하게 설명한다.Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of a system capable of performing robotic surgery according to an embodiment of the present invention.

Referring to FIG. 1, the robotic surgery system includes a medical imaging apparatus 10, a server 100, and a control unit 30, a display 32, and a surgical robot 34 provided in an operating room. Depending on the embodiment, the medical imaging apparatus 10 may be omitted from the robotic surgery system according to the disclosed embodiments.

In one embodiment, the surgical robot 34 includes an imaging device 36 and a surgical tool 38.

In one embodiment, robotic surgery is performed by a user controlling the surgical robot 34 through the control unit 30. In one embodiment, robotic surgery may also be performed automatically by the control unit 30 without the user's control.

The server 100 is a computing device including at least one processor and a communication unit.

The control unit 30 includes a computing device comprising at least one processor and a communication unit. In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.

The imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera device and is used to photograph an object, namely the surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled to a surgical arm of the surgical robot 34.
In one embodiment, the image captured by the imaging device 36 is displayed on the display 32.
In one embodiment, the surgical robot 34 includes one or more surgical tools 38 capable of performing cutting, clipping, fixing, and grasping operations on the surgical site. The surgical tool 38 is used in combination with a surgical arm of the surgical robot 34.

The control unit 30 receives information necessary for the surgery from the server 100, or generates such information and provides it to the user. For example, the control unit 30 displays the generated or received information necessary for the surgery on the display 32.

For example, the user performs robotic surgery by operating the control unit 30 while watching the display 32, thereby controlling the movement of the surgical robot 34.

The server 100 generates information necessary for the robotic surgery using medical image data of the object previously captured by the medical imaging apparatus 10, and provides the generated information to the control unit 30.

The control unit 30 provides the user with the information received from the server 100 by displaying it on the display 32, or controls the surgical robot 34 using the information received from the server 100.

In one embodiment, the means usable in the medical imaging apparatus 10 are not limited; for example, various other medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
As described above, when robotic surgery is performed, data containing various surgical information can be obtained from the surgical images captured during the procedure or from the control process of the surgical robot. The present invention therefore aims to perform learning using the surgical images or surgical data obtainable during a surgical procedure as training material and, through such learning, to provide a method that can be applied to the surgical procedures of other patients.

Hereinafter, for convenience of description, a "computer" is described as performing the method of predicting surgery time based on a surgical image according to the embodiments disclosed herein. The "computer" may be the server 100 or the control unit 30 of FIG. 1, but is not limited thereto and may broadly encompass any device capable of performing computational processing. For example, the computer may be a computing device provided separately from the devices shown in FIG. 1.

Furthermore, the embodiments disclosed below are not applicable only in connection with the robotic surgery system shown in FIG. 1; they can be applied to all kinds of embodiments in which surgical images are acquired and utilized during a surgical procedure. For example, besides robotic surgery, they can be applied in connection with minimally invasive surgery such as laparoscopic surgery or surgery using an endoscope.
FIG. 2 is a flowchart schematically illustrating a method of predicting surgery time based on a surgical image according to an embodiment of the present invention.

Referring to FIG. 2, the method of predicting surgery time based on a surgical image according to an embodiment of the present invention may include: acquiring a preset surgical image including surgical operations for a specific surgical step (S100); generating learning data using the preset surgical image and a surgery duration obtained based on the preset surgical image (S200); and performing learning based on the learning data to predict the surgery time for the specific surgical step (S300). Each step is described in detail below.
The computer may acquire a preset surgical image including surgical operations for a specific surgical step (S100).

When a specific surgery (e.g., gastric cancer surgery or colorectal cancer surgery) is performed on a patient, the medical staff may operate on the patient directly, or may perform minimally invasive surgery using a laparoscope or endoscope, or using a surgical robot as described with reference to FIG. 1. In this case, the computer may acquire a surgical image capturing scenes that include the surgical operations performed during the procedure together with the related surgical tools and surgical site. In one embodiment, the computer may acquire, from a camera that has entered the patient's body, a surgical image capturing a scene that includes the surgical site currently being operated on and the surgical tools.

Meanwhile, different surgeries (e.g., gastric cancer surgery and colorectal cancer surgery) proceed through different surgical procedures, but surgeries of the same type may proceed through the same or a similar series of surgical procedures. That is, when a specific surgery (e.g., the same type of surgery, such as gastric cancer surgery) is performed on a plurality of patients, the specific surgical procedure can be carried out by performing the same or similar surgical operations on each patient.

In one embodiment, a specific surgery may consist of surgical procedures (i.e., surgical steps) predefined according to classification criteria. For example, the computer may classify the surgical steps according to the elapsed time of the surgery, or may classify them so that each step corresponds to a surgical site of the specific surgery. Alternatively, the computer may classify the surgical steps based on the position or movement range of the camera during the surgery, or based on changes of the surgical tools (e.g., replacement) during the surgery. The computer may thus classify the surgical steps according to a specific classification criterion and then define in advance the surgical operations performed in each classified step.
Moreover, the surgical steps constituting a specific surgery may be organized into a hierarchical structure. In one embodiment, a specific surgical procedure may be classified stepwise from the lowest layer (level) to the highest layer (level) to form a hierarchy. Here, the lowest layer consists of the smallest units representing the surgical procedure and may include minimal surgical actions, each meaningful as a single elementary motion. For example, the computer may recognize a surgical action exhibiting one constant motion pattern, such as cutting, grasping, or moving, as a minimal surgical action and place it in the lowest layer (i.e., the lowest surgical step). The computer may also group at least one minimal surgical action into an upper layer according to a specific classification criterion (e.g., elapsed time, surgical site, camera movement, or change of surgical tool). For example, when minimal surgical actions such as grasping, moving, and cutting with a surgical tool are performed in a given order, and connecting them in that order carries meaning as a specific operation, they can be grouped into one upper layer. As another example, when the minimal surgical actions of clipping, moving, and cutting are performed in succession, the computer can recognize this as a surgical operation of dividing a blood vessel and group it into one upper layer. As yet another example, when the minimal surgical actions of grasping, lifting, cutting, and sweeping away are performed in succession, the computer can recognize this as a surgical operation of removing fat and group it into one upper layer. In this way, whenever lower-layer surgical actions (i.e., lower surgical steps) are recognized as an upper surgical operation (i.e., an upper surgical step) having a specific meaning, they can be classified into an upper layer, and this process can be repeated, so that a hierarchy is finally formed up to the highest layer (i.e., the highest surgical step) of the specific surgical procedure. A specific surgery can thus be expressed as a hierarchical structure, such as a tree, of the surgical steps constituting it.
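By way of illustration only (the disclosure does not prescribe any particular data structure), such a hierarchy could be held as a simple tree whose leaves are the minimal surgical actions; all names in the following Python sketch are hypothetical.

```python
# Illustrative sketch of the hierarchical surgical procedure described above.
# The class and step names are hypothetical, not taken from the disclosure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SurgicalStep:
    name: str                                      # e.g. "cutting" (leaf) or "vessel division"
    children: List["SurgicalStep"] = field(default_factory=list)

    def is_minimal_action(self) -> bool:
        # Leaves of the tree are the minimal surgical actions (lowest layer).
        return not self.children

# Lowest layer: minimal actions, each a single constant motion pattern.
clip = SurgicalStep("clipping")
move = SurgicalStep("moving")
cut = SurgicalStep("cutting")

# Grouping a recognized sequence of minimal actions into an upper-layer step.
vessel_division = SurgicalStep("vessel division", [clip, move, cut])

# Repeating the grouping yields the top layer of the procedure (simplified).
procedure = SurgicalStep("specific surgery", [vessel_division])
```

Grouping a recognized action sequence then amounts to attaching leaves to a new parent node, and repeating this produces the tree up to the highest surgical step.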
Accordingly, the surgical image acquired in step S100 is obtained by performing a specific surgery on a patient, and may contain image frames capturing one surgical step belonging to a specific layer of the hierarchically structured surgical procedure. Here, the specific layer may refer to an intermediate layer, that is, any layer other than the lowest and highest layers of the hierarchy.

The surgical image acquired in step S100 may also consist of image frames containing a predefined series of surgical operations for the specific surgical step of the specific surgery. In one embodiment, when a specific surgery of the same type has been performed on a plurality of patients, the computer may acquire from each patient a surgical image corresponding to a specific surgical step predefined within that surgical procedure. In other words, each surgical image acquired from the plurality of patients may be set as the image frames containing the predetermined surgical operations performed in the specific surgical step. In setting a patient's surgical image as the image frames containing the predetermined surgical operations of a specific surgical step, the medical staff or the computer may have generated it in advance by extracting, from the full surgical image, only the image frames corresponding to that surgical step.

Hereinafter, for convenience of description, a surgical image set as the image frames containing the predetermined surgical operations performed in a specific surgical step is referred to as a preset surgical image.
The computer may generate learning data using the preset surgical image acquired in step S100 and the surgery duration obtained based on the preset surgical image (S200).

In one embodiment, the computer may obtain a first surgery duration based on the preset surgical image. The computer may then generate a censored surgical image by removing an arbitrary section from the preset surgical image, and obtain a second surgery duration based on the censored surgical image. Based on the first surgery duration and the second surgery duration, the computer may generate at least one of the preset surgical image and the censored surgical image as learning data. This process is described in detail with reference to FIG. 3.
Meanwhile, even when the same specific surgical step is applied to a plurality of patients, variability can arise depending on each patient's condition, the surgical style of each operating surgeon, and so on. Accordingly, the actual surgery time needed to perform a specific surgical step may differ from patient to patient, and even when the same surgery is performed it is difficult to predict the exact surgery duration or remaining surgery time. To address this problem, the present invention acquires surgical images containing the surgical operations performed in a specific surgical step, organizes them into learning data, and performs learning, thereby providing a method of predicting the surgery time required for each surgical step more accurately. It also provides a method of estimating the remaining surgery time based on the surgical image at the current point in time obtained during an actual surgical procedure.
FIG. 3 is a diagram for explaining a process of generating learning data based on a surgical image according to an embodiment of the present invention.

Referring to FIG. 3, the computer may acquire a preset surgical image from each of a plurality of patients (a first patient through an n-th patient).

Even though the same surgical step was performed, the preset surgical images acquired from the plurality of patients may, as described above, contain different image frames depending on each patient's condition, the surgical style of the operating surgeon, and so on. It is also not easy for the medical staff or the computer to accurately identify, in a surgical image, the surgical operations defined for a specific surgical step and to extract only the corresponding image frames. That is, although each preset surgical image acquired from the plurality of patients consists of image frames for the same surgical step, these frames may contain not only the surgical operations defined for that step but also operations from other steps (e.g., the preceding or following surgical step). Even when they consist of the surgical operations defined for the step, they may contain additional operations beyond the essential ones owing to the various sources of variability during surgery. From such surgical images alone, it is difficult to determine the surgery time needed for a specific surgical step, and they are not suitable for uniform application to all patients.
Therefore, the present invention constructs learning data from the preset surgical images acquired from patients and learns from them, so as to accurately predict the surgery time for the corresponding surgical image (i.e., the corresponding surgical step).

In one embodiment, the computer may generate the learning data by applying a survival analysis technique. That is, the computer may apply survival analysis to analyze, based on the surgical images, the time the surgery requires, and generate the learning data accordingly.
For example, the computer may first generate, from each preset surgical image, a censored surgical image in which an arbitrary section has been removed (S210). Here, the censored surgical image may be an image from which at least one image frame corresponding to an arbitrary section of the preset surgical image has been removed. As described above, the preset surgical image may contain image frames of other surgical steps (e.g., the preceding or following step), or additional surgical operations that are not essential to the step in question. Since the preset surgical image may thus not consist solely of the essential surgical operations of the step, the computer generates a censored surgical image by removing an arbitrary section from it. The arbitrary section may, for example, be determined through a random function, or a section containing the leading or trailing image frames of the surgical image may be chosen as the arbitrary section. Alternatively, when analysis of the image frames reveals frames judged to show a surgical operation different from the ongoing sequence, or frames judged to differ from the predetermined operations to be performed in the specific surgical step, that section may be chosen as the arbitrary section.
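A minimal sketch of this censoring step is given below, assuming the preset surgical image is available as a frame sequence with a known frame rate; the function name and the purely random choice of section are illustrative assumptions, one of the options the text allows.

```python
# Hedged sketch: remove one randomly chosen section of frames (S210) and
# return the censored video with its duration (frame count / frame rate).
import random
from typing import Sequence, Tuple

def censor_video(frames: Sequence, fps: float) -> Tuple[list, float]:
    n = len(frames)
    start = random.randrange(0, n)             # random start of the removed section
    end = random.randrange(start + 1, n + 1)   # random end (exclusive)
    censored = list(frames[:start]) + list(frames[end:])
    return censored, len(censored) / fps       # second surgery duration in seconds
```

As noted above, the removed section could equally be taken from the head or tail of the video, or around frames judged to belong to another surgical step.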
For example, as shown in FIG. 3, the computer may generate the first through n-th preset surgical images acquired from the first through n-th patients, respectively, together with first through n-th censored surgical images. In FIG. 3, the censored surgical images are depicted as the image frames enclosed in dotted boxes.

Next, the computer may obtain the surgery duration needed to perform the specific surgical step based on each preset surgical image (hereinafter, the first surgery duration), and the surgery duration needed to perform the specific surgical step based on each censored surgical image (hereinafter, the second surgery duration) (S220).

For example, as shown in FIG. 3, the computer may obtain and compare the first surgery durations of the first through n-th preset surgical images (e.g., the Actual Duration of FIG. 3) and the second surgery durations of the first through n-th censored surgical images (e.g., the Censored Duration of FIG. 3). Here, each surgery duration may be the time taken when all the image frames of the respective surgical image are carried out, for example the playback time of the surgical image.
Next, the computer may generate learning data using at least one of the preset surgical image and/or the censored surgical image, based on the first surgery duration and the second surgery duration (S230).

For example, as shown in FIG. 3, the computer may compare the first surgery duration of the first preset surgical image (e.g., the Actual Duration of FIG. 3) with the second surgery duration of the first censored surgical image (e.g., the Censored Duration of FIG. 3) and determine which of the first patient's two surgical images to use as learning data. The computer may then map the surgical image determined as learning data for the first patient (e.g., the first preset surgical image) to its surgery duration and generate the pair as learning data.

Depending on the embodiment, in comparing the first and second surgery durations to determine which surgical image to use as learning data, the computer may use the lengths of the durations, a preset criterion, or an arbitrary selection. For example, the computer may select the surgical image having the shorter of the first and second surgery durations. Alternatively, it may use the difference between the first and second surgery durations; in one example, when the difference between the two durations is equal to or greater than a predetermined reference value, the computer may select the surgical image having the first surgery duration as the learning data.
As shown in FIG. 3, based on the results of comparing the first surgery durations of the first through n-th preset surgical images with the second surgery durations of the first through n-th censored surgical images, the computer may select either the preset surgical image or the censored surgical image for each patient, map it to the corresponding surgery duration, and organize the pair as learning data for the specific surgical step. For example, when a preset surgical image is selected, the computer may set its surgical status to "event" and record the surgical duration of the selected image, defining the pair as learning data. When a censored surgical image is selected, the computer may set its surgical status to "censored" and record the surgical duration of the selected image, defining the pair as learning data. In this way, the computer can generate a learning data set from the preset or censored surgical images acquired from the plurality of patients who underwent the specific surgical step.
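The sketch below assembles one such labelled example; it uses the reference-value rule described above, and the 60-second threshold is an arbitrary assumption, not a value from this disclosure.

```python
# Hedged sketch of building one labelled learning example (S230).
# The selection rule and threshold are only one of the options the text allows.
def build_training_example(preset_frames, censored_frames, fps, threshold=60.0):
    d1 = len(preset_frames) / fps      # first surgery duration ("Actual Duration")
    d2 = len(censored_frames) / fps    # second surgery duration ("Censored Duration")
    if d1 - d2 >= threshold:           # large gap -> keep the preset video as "event"
        return {"frames": preset_frames, "status": "event", "duration": d1}
    return {"frames": censored_frames, "status": "censored", "duration": d2}

# Applied per patient (preset_videos and censored_videos are assumed inputs):
# dataset = [build_training_example(p, c, fps=30.0)
#            for p, c in zip(preset_videos, censored_videos)]
```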
Referring again to FIG. 2, the computer may perform learning based on the learning data generated in step S200 to predict the surgery time for the specific surgical step (S300).

In one embodiment, the computer may perform the learning using each image frame in the learning data and the surgery duration needed to perform the specific surgical step based on those frames. Through the learning, the computer can predict the average surgery duration for the specific surgical step. For example, when a preset surgical image has been generated as learning data, the computer may perform the learning based on each image frame of the preset surgical image and its first surgery duration. When a censored surgical image has been generated as learning data, the computer may perform the learning based on each image frame of the censored surgical image and its second surgery duration. This process is described in detail with reference to FIG. 4.
FIG. 4 is a diagram for explaining a process of performing learning based on the learning data according to an embodiment of the present invention.

Referring to FIG. 4, the computer may acquire the learning data and perform learning on it using deep learning. The learning data may have been generated through steps S210 to S230, based on either a preset surgical image or a censored surgical image.
In one embodiment, the computer may receive the learning data and extract each image frame (L1 to LN) from it (S310).

For example, when the received learning data is a preset surgical image, the computer may extract each image frame contained in the preset surgical image. When the received learning data is a censored surgical image, the computer may extract each image frame contained in the censored surgical image.
The computer may perform learning on each extracted image frame using a CNN (Convolutional Neural Network) and, as a result, extract feature information for each image frame (S320).

For example, the computer may input each image frame to at least one layer (e.g., a convolution layer) so that the features of the surgical image are learned. From this learning result, the computer can infer what each image frame shows or represents.

The computer may then perform learning using an RNN (Recurrent Neural Network) on each image frame derived from the CNN learning result (S330).

For example, the computer may receive the learning data frame by frame and perform learning using an RNN (e.g., an LSTM). Here, the feature information of each image frame is input, and the learning is performed using, in addition, the surgical status information of the learning data (e.g., whether it is "event" or "censored") and the surgery duration of the learning data. The computer may also perform the learning while connecting at least one image frame from an earlier point in time with the image frame at the current point in time. By learning the relationships between the image frames based on their feature information, the computer can thus determine information about the specific surgical step, such as whether the frames show the essential surgical operations predefined for the step or contain operations from another surgical step.
Through the learning process using the CNN and RNN described above, the computer can predict the average (median) surgical duration for the specific surgical step (S340).
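As a non-authoritative sketch of such a CNN-plus-RNN learner (the disclosure names a CNN and an LSTM-style RNN but no framework, loss, or layer sizes), one possible arrangement in PyTorch is shown below; the ResNet-18 backbone, the hidden size, and a recent torchvision API are illustrative assumptions.

```python
# Hedged sketch: per-frame CNN features (S320) fed to an LSTM (S330),
# with a regression head for the surgery duration (S340).
import torch
import torch.nn as nn
from torchvision import models

class DurationPredictor(nn.Module):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # assumed per-frame feature extractor
        backbone.fc = nn.Identity()                # keep the 512-dim feature vector
        self.cnn = backbone
        self.rnn = nn.LSTM(512, hidden_size, batch_first=True)  # temporal model
        self.head = nn.Linear(hidden_size, 1)      # duration estimate

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W) -> one duration estimate per video
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1]).squeeze(-1)   # read out at the last time step
```

For samples labelled "censored", a survival-style loss that penalizes only underestimates of the true duration could replace a plain regression loss; the disclosure leaves the loss unspecified.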
For example, as described with reference to FIG. 3, the computer acquires a surgical image from each of the plurality of patients who underwent the specific surgical step and generates learning data from each, so the learning described with reference to FIG. 4 can be applied repeatedly to the learning data generated from each patient. By repeatedly learning from a large amount of learning data, the computer thus becomes able to predict the average surgery duration for the specific surgical step.

According to an embodiment of the present invention, by performing steps S100 to S300 described above, the surgery time needed to perform each surgical step in a surgical procedure can ultimately be predicted. The computer can thereby build a learning model that predicts the surgery duration of each surgical step.

Furthermore, in one embodiment of the present invention, the surgery duration predicted for each surgical step as described above can be applied to the surgical procedures of other patients.
In one embodiment, when a patient is currently undergoing actual surgery, the computer may acquire the actual surgical image of the patient in real time. The computer may extract the image frame at the current point in time from the acquired actual surgical image and, based on it, recognize the surgical step currently being performed. The computer can therefore calculate, from the current image frame, the surgery duration elapsed in the current surgical step up to the present. In addition, through the learning that predicts the surgery duration of each surgical step as described above, the computer can determine the average surgery duration of the current surgical step. Using the average surgery duration of the current step predicted through learning and the elapsed duration calculated from the current image frame of the ongoing surgery, the computer can then predict the remaining surgery time needed to carry out the surgical operations of the current step that remain after the present point in time.
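In its simplest form, the remaining-time estimate described above is the difference between the learned average duration of the current step and the time already elapsed; a minimal sketch (function name and the clamp at zero are assumptions):

```python
# Remaining surgery time for the current step, in consistent units (e.g. seconds).
def remaining_time(predicted_average: float, elapsed: float) -> float:
    return max(predicted_average - elapsed, 0.0)

# e.g. a step predicted to take 1200 s with 750 s elapsed -> 450 s remaining
```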
Accordingly, the present invention can predict not only the surgery duration needed for each surgical step but also, accurately, the surgery time remaining after the present point. By providing the medical staff with an accurate surgery duration and remaining surgery time, the current state of the ongoing surgery can be grasped more precisely, and the remainder of the procedure can be carried out efficiently.

An accurate surgery duration and remaining surgery time must also be known for the surgery to proceed in accordance with the patient's condition. The anesthesia time during surgery is particularly important; since the present invention can provide the surgery duration and remaining surgery time in real time, an accurate anesthetic dose can be calculated. It is furthermore effective for determining an appropriate additional anesthetic dose according to the remaining surgery time.
FIG. 5 is a diagram schematically showing the configuration of an apparatus 200 for performing the method of predicting surgery time based on a surgical image according to an embodiment of the present invention.

Referring to FIG. 5, the processor 210 may include one or more cores (not shown), a graphics processing unit (not shown), and/or a connection path (e.g., a bus) for exchanging signals with other components.

The processor 210 according to an embodiment executes one or more instructions stored in the memory 220, thereby performing the method of predicting surgery time based on a surgical image described with reference to FIGS. 2 to 4.

In one example, by executing the one or more instructions stored in the memory 220, the processor 210 may perform: acquiring a preset surgical image including surgical operations for a specific surgical step; generating learning data using the preset surgical image and the surgery duration obtained based on the preset surgical image; and performing learning based on the learning data to predict the surgery time for the specific surgical step.

The processor 210 may further include RAM (Random Access Memory, not shown) and ROM (Read-Only Memory, not shown) for temporarily and/or permanently storing the signals (or data) processed inside it. The processor 210 may also be implemented as a system on chip (SoC) including at least one of a graphics processing unit, RAM, and ROM.

The memory 220 may store programs (one or more instructions) for the processing and control of the processor 210. The programs stored in the memory 220 may be divided into a plurality of modules according to their functions.
The method of predicting surgery time based on a surgical image according to the embodiment of the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.

For the computer to read the program and execute the methods implemented as the program, the program may include code written in a computer language, such as C, C++, JAVA, or machine language, that the computer's processor (CPU) can read through the computer's device interface. Such code may include functional code related to the functions that define what is needed to execute the methods, and control code related to the execution procedures needed for the computer's processor to execute those functions according to a predetermined procedure. The code may further include memory-reference code indicating where (at which address) in the computer's internal or external memory the additional information or media needed for the processor to execute the functions should be referenced. When the computer's processor needs to communicate with any remote computer or server in order to execute the functions, the code may additionally include communication-related code specifying how to communicate with the remote computer or server using the computer's communication module and what information or media to transmit and receive during communication.

The storage medium does not mean a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device. Specific examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage. That is, the program may be stored in various recording media on various servers accessible to the computer, or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that the computer-readable code is stored in a distributed fashion.

The steps of the method or algorithm described in connection with the embodiments of the present invention may be implemented directly in hardware, in a software module executed by hardware, or in a combination of the two. The software module may reside in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art to which the present invention belongs.

Although the embodiments of the present invention have been described above with reference to the accompanying drawings, those of ordinary skill in the art to which the present invention belongs will understand that the present invention can be carried out in other specific forms without changing its technical spirit or essential features. The embodiments described above are therefore to be understood as illustrative in all respects and not restrictive.
Claims (10)
- A method of predicting surgery time based on a surgical image, performed by a computer, the method comprising: acquiring a preset surgical image including surgical operations for a specific surgical step; generating learning data using the preset surgical image and a surgery duration obtained based on the preset surgical image; and performing learning based on the learning data to predict the surgery time for the specific surgical step.
- The method of claim 1, wherein the generating of the learning data comprises: obtaining a first surgery duration based on the preset surgical image; generating a censored surgical image in which an arbitrary section has been removed from the preset surgical image, and obtaining a second surgery duration based on the censored surgical image; and generating at least one of the preset surgical image and the censored surgical image as the learning data, based on the first surgery duration and the second surgery duration.
- The method of claim 2, wherein the predicting of the surgery time comprises: performing learning using each image frame in the learning data and the surgery duration needed to perform the specific surgical step based on each image frame; and predicting an average surgery duration for the specific surgical step through the learning.
- The method of claim 3, wherein, when the preset surgical image is generated as the learning data, the learning is performed based on each image frame in the preset surgical image and the first surgery duration.
- The method of claim 3, wherein, when the censored surgical image is generated as the learning data, the learning is performed based on each image frame in the censored surgical image and the second surgery duration.
- The method of claim 3, further comprising acquiring an actual surgical image by performing the specific surgical step in an actual surgical procedure, wherein the predicting of the surgery time comprises: obtaining, based on the image frame at the current point in time in the actual surgical image, the surgery duration spent performing the specific surgical step up to the current point in time; and predicting the remaining surgery time needed to perform the specific surgical step after the current point in time, based on the average surgery duration for the specific surgical step predicted through the learning and the surgery duration up to the current point in time.
- The method of claim 1, wherein the acquiring of the preset surgical image comprises acquiring, from each of a plurality of patients, a preset surgical image in which the specific surgical step was performed.
- The method of claim 1, wherein the specific surgical step is one of the surgical steps belonging to a specific layer of a surgical procedure organized in a hierarchical structure according to surgical operations.
- An apparatus comprising: a memory storing one or more instructions; and a processor executing the one or more instructions stored in the memory, wherein the processor, by executing the one or more instructions, performs: acquiring a preset surgical image including surgical operations for a specific surgical step; generating learning data using the preset surgical image and a surgery duration obtained based on the preset surgical image; and performing learning based on the learning data to predict the surgery time for the specific surgical step.
- A computer program, stored in a computer-readable recording medium in combination with a computer which is hardware, to perform the method of claim 1.
Applications Claiming Priority (8)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2018-0019868 | 2018-02-20 | | |
| KR20180019867 | 2018-02-20 | | |
| KR20180019868 | 2018-02-20 | | |
| KR10-2018-0019867 | 2018-02-20 | | |
| KR20180019866 | 2018-02-20 | | |
| KR10-2018-0019866 | 2018-02-20 | | |
| KR1020180145157A (KR102013828B1) | 2018-02-20 | 2018-11-22 | Method and apparatus for predicting surgical duration based on surgical video |
| KR10-2018-0145157 | 2018-11-22 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2019164273A1 | 2019-08-29 |
Family
ID=67688215
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/002091 WO2019164273A1 (en) | Method and device for predicting surgery time on basis of surgery image | 2018-02-20 | 2019-02-20 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019164273A1 (en) |
2019-02-20: PCT/KR2019/002091 filed as WO2019164273A1 (status: active, Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006077797A1 (en) * | 2005-01-19 | 2006-07-27 | Olympus Corporation | Surgery data management device, surgery control device, and surgery data processing method |
JP2007122174A (en) * | 2005-10-25 | 2007-05-17 | Olympus Medical Systems Corp | Operation schedule display system |
JP2011224336A (en) * | 2010-03-31 | 2011-11-10 | Sugiura Gijutsushi Jimusho:Kk | Operation process management system and method, and operation process management device |
KR101302595B1 (en) * | 2012-07-03 | 2013-08-30 | 한국과학기술연구원 | System and method for predict to surgery progress step |
KR20180010721A (en) * | 2016-07-22 | 2018-01-31 | 한국전자통신연구원 | System for supporting intelligent surgery and method for using the same |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067957A (en) * | 2022-01-17 | 2022-02-18 | 武汉大学 | Operation time correction method, device, electronic equipment and storage medium |
WO2024088836A1 (en) * | 2022-10-24 | 2024-05-02 | Koninklijke Philips N.V. | Systems and methods for time to target estimation from image characteristics |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019132168A1 (en) | System for learning surgical image data | |
KR102014385B1 (en) | Method and apparatus for learning surgical image and recognizing surgical action based on learning | |
US20220286687A1 (en) | Modifying data from a surgical robotic system | |
WO2019132614A1 (en) | Surgical image segmentation method and apparatus | |
WO2021045367A1 (en) | Method and computer program for determining psychological state through drawing process of counseling recipient | |
WO2021006472A1 (en) | Multiple bone density displaying method for establishing implant procedure plan, and image processing device therefor | |
WO2019117563A1 (en) | Integrated predictive analysis apparatus for interactive telehealth and operating method therefor | |
WO2019132165A1 (en) | Method and program for providing feedback on surgical outcome | |
WO2019164273A1 (en) | Method and device for predicting surgery time on basis of surgery image | |
WO2019132244A1 (en) | Method for generating surgical simulation information and program | |
WO2020032562A2 (en) | Bioimage diagnosis system, bioimage diagnosis method, and terminal for executing same | |
JP4900551B2 (en) | Medical information processing device synchronized with standard treatment program | |
WO2016085236A1 (en) | Method and system for automatic determination of thyroid cancer | |
WO2022158843A1 (en) | Method for refining tissue specimen image, and computing system performing same | |
WO2022173232A2 (en) | Method and system for predicting risk of occurrence of lesion | |
WO2019164277A1 (en) | Method and device for evaluating bleeding by using surgical image | |
WO2021054700A1 (en) | Method for providing tooth lesion information, and device using same | |
WO2021201582A1 (en) | Method and device for analyzing causes of skin lesion | |
WO2023136695A1 (en) | Apparatus and method for generating virtual lung model of patient | |
WO2023058942A1 (en) | Device and method for providing oral health analysis service | |
WO2020022825A1 (en) | Method and electronic device for artificial intelligence (ai)-based assistive health sensing in internet of things network | |
WO2020159276A1 (en) | Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image | |
WO2022108387A1 (en) | Method and device for generating clinical record data | |
WO2019164279A1 (en) | Method and apparatus for evaluating recognition level of surgical image | |
WO2022019514A1 (en) | Apparatus, method, and computer-readable recording medium for decision-making in hospital |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19756625; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19756625; Country of ref document: EP; Kind code of ref document: A1 |