WO2019132169A1 - Method, apparatus, and program for surgical image playback control - Google Patents

Method, apparatus, and program for surgical image playback control Download PDF

Info

Publication number
WO2019132169A1
WO2019132169A1 PCT/KR2018/010334 KR2018010334W
Authority
WO
WIPO (PCT)
Prior art keywords
surgical
image
importance
surgical image
steps
Prior art date
Application number
PCT/KR2018/010334
Other languages
French (fr)
Korean (ko)
Inventor
이종혁
허성환
Original Assignee
(주)휴톰
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)휴톰 filed Critical (주)휴톰
Publication of WO2019132169A1 publication Critical patent/WO2019132169A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/70 Manipulators specially adapted for use in surgery
    • A61B34/76 Manipulators having means for providing feel, e.g. force or tactile feedback
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T3/14
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/20 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body

Definitions

  • The present invention relates to a method, apparatus, and program for controlling surgical image playback.
  • Deep learning is defined as a set of machine learning algorithms that attempt high-level abstraction (summarizing key content or functions from large amounts of data or complex data) through a combination of several nonlinear transformation techniques. Broadly speaking, deep learning can be viewed as a field of machine learning that teaches computers how humans think.
  • The present invention provides a method, an apparatus, and a program for controlling surgical image playback.
  • According to one aspect, a method of controlling surgical image playback comprises: obtaining, by a computer, a surgical image; acquiring information obtained by dividing the surgical image into one or more regions; acquiring information on the surgical stage corresponding to each of the divided regions; determining the importance of each surgical stage; and controlling playback of the surgical image based on the determined importance.
  • Acquiring the information on the surgical steps may include determining the type of surgery, and determining the importance may include determining the importance of each surgical step based on the determined type of surgery.
  • Controlling playback of the surgical image may include determining, based on the importance of each surgical step, whether to play the surgical image corresponding to that step and at what summary level.
  • Controlling playback of the surgical image may also include: determining a division level for each surgical step based on its importance; dividing each surgical step according to the determined division level; and playing the surgical image corresponding to each divided step while determining, based on the division level of each divided step, whether to play the corresponding surgical image and at what summary level.
  • Acquiring the divided information may include acquiring information obtained by hierarchically dividing the surgical image into one or more classification units, and acquiring the information on the surgical steps may include searching the hierarchically divided information starting from an upper layer, and acquiring information on the surgical stage corresponding to each piece of hierarchically divided information.
  • Determining the importance may include determining the importance of the surgical stage corresponding to each piece of hierarchically divided information, and controlling playback of the surgical image may include determining, based on the importance of each surgical step, whether to hierarchically divide that step further, and determining, based on the importance of each hierarchically divided step, whether to play the corresponding surgical image and at what summary level.
  • The method may further include recognizing at least one event included in the surgical image and playing the surgical image corresponding to the event.
  • Playing the surgical image corresponding to the event may include determining the importance of the recognized event and controlling playback of the surgical image corresponding to the event based on the importance of the event.
  • According to another aspect, a learning data management apparatus includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory, wherein by executing the one or more instructions the processor performs the steps of: obtaining a surgical image; acquiring information obtained by dividing the surgical image into one or more regions; acquiring information on the surgical stage corresponding to each of the divided regions; determining the importance of each surgical step; and controlling playback of the surgical image based on the determined importance.
  • According to another aspect, there is provided a computer program stored in a computer-readable recording medium, which, in combination with a computer as hardware, performs the surgical image playback control method.
  • FIG. 1 is a simplified schematic diagram of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
  • FIG. 2 is a flowchart illustrating a method of controlling reproduction of a surgical image according to an embodiment.
  • FIG. 3 is a diagram showing an example of a method of hierarchically dividing and recognizing a surgical operation.
  • FIG. 4 is a view for explaining a method of determining a division level of each surgical stage and reproducing a surgical image according to an embodiment.
  • FIG. 5 is a view for explaining a method of controlling reproduction of a surgical image including an event according to an embodiment.
  • FIG. 6 is a configuration diagram of an apparatus according to an embodiment.
  • As used herein, the term "part" or "module" refers to a software or hardware component such as an FPGA or ASIC, and a "part" or "module" performs certain roles. However, "part" or "module" is not limited to software or hardware. A "part" or "module" may be configured to reside on an addressable storage medium and may be configured to run on one or more processors. Thus, by way of example, a "part" or "module" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided within components and "parts" or "modules" may be combined into a smaller number of components and "parts" or "modules" or further separated into additional components and "parts" or "modules".
  • FIG. 1 is a simplified schematic diagram of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
  • the robotic surgery system includes a medical imaging apparatus 10, a server 20, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
  • the medical imaging equipment 10 may be omitted from the robotic surgery system according to the disclosed embodiment.
  • the surgical robot 34 includes a photographing device 36 and a surgical tool 38.
  • robotic surgery is performed by the user controlling the surgical robot 34 using the control unit 30.
  • robot surgery may be performed automatically by the control unit 30 without user control.
  • the server 20 is a computing device including at least one processor and a communication unit.
  • the control unit 30 includes a computing device including at least one processor and a communication unit. In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
  • the photographing apparatus 36 includes at least one image sensor. That is, the photographing device 36 includes at least one camera device and is used to photograph a target object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
  • the image photographed at the photographing device 36 is displayed on the display 340.
  • the surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, anchoring, grabbing, etc., of the surgical site.
  • the surgical tool 38 is used in combination with the surgical arm of the surgical robot 34.
  • the control unit 30 receives information necessary for surgery from the server 20, or generates information necessary for surgery and provides the information to the user. For example, the control unit 30 displays on the display 32 information necessary for surgery, which is generated or received.
  • the user operates the control unit 30 while viewing the display 32 to perform the robot surgery by controlling the movement of the surgical robot 34.
  • the server 20 generates information necessary for the robot surgery using the medical image data of the object photographed previously from the medical imaging apparatus 10 and provides the generated information to the control unit 30.
  • the control unit 30 provides the information received from the server 20 to the user by displaying the information on the display 32 or controls the surgical robot 34 using the information received from the server 20.
  • The means that can be used as the medical imaging equipment 10 is not limited; for example, various other medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
  • the surgical image obtained in the photographing device 36 is transmitted to the control section 30.
  • control unit 30 may segment the surgical image obtained during the operation in real time.
  • control unit 30 transmits a surgical image to the server 20 during or after surgery.
  • the server 20 can divide and analyze the surgical image.
  • the server 20 learns and stores at least one model for dividing and analyzing the surgical image. In addition, the server 20 learns and stores at least one model for generating an optimized surgical process.
  • the server 20 or the client can display the obtained surgical image.
  • Since the surgical image is generally very long, it is practically difficult for a researcher or doctor to review the entire surgical image.
  • FIG. 2 is a flowchart illustrating a method of controlling reproduction of a surgical image according to an embodiment.
  • Referring to FIG. 2, steps that may be performed by the server 20 or the client shown in FIG. 1 are shown in time series.
  • Hereinafter, for convenience of explanation, it is described that the computer performs the steps shown in FIG. 2.
  • However, all or some of the steps shown in FIG. 2 may be performed by the server 20 or the client, respectively.
  • In step S110, the computer acquires a surgical image.
  • the surgical image may be a surgical image actually performed by the surgical robot 34, or may be a simulation image performed based on the image obtained from the medical imaging apparatus 10.
  • the surgical image may be an image according to the optimized surgical method, which is generated based on the image obtained from the medical imaging apparatus 10.
  • the surgical image in the present specification may mean an image in which a surgical procedure is actually photographed, or a 3D modeling image generated based on a medical image obtained from the medical image photographing apparatus 10.
  • the type of surgical image referred to in the present specification is not limited, and may be understood as meaning including all types of images including at least a part of the surgical procedure.
  • In step S120, the computer obtains information obtained by segmenting the surgical image acquired in step S110 into one or more regions.
  • the surgical image is automatically segmented by the computer.
  • the surgical image may be segmented by various criteria.
  • a surgical image may be segmented based on the type of object included in the image.
  • the division method based on the kind of object requires a step in which the computer recognizes each object.
  • An object recognized in the surgical image may include a body part, an object introduced from the outside, and an object generated internally.
  • Body parts include those captured in advance by medical imaging (e.g., CT) before surgery and those not captured.
  • A body part captured by medical imaging includes organs, blood vessels, bones, tendons, and the like, and such a body part can be recognized based on a 3D modeling image generated from the medical image.
  • The position, size, and shape of each body part are recognized in advance by a 3D analysis method based on the medical image.
  • The computer defines an algorithm that can grasp the position of each body part in the surgical image in real time and, based on this, can obtain information on the position, size, and shape of each body part included in the surgical image.
  • Body parts not captured by medical imaging include the omentum, which does not appear in medical images and therefore must be recognized in real time during surgery.
  • The computer can determine the location and size of the omentum using image recognition methods and, when blood vessels are present within the omentum, predict the locations of those vessels.
  • Objects introduced from the outside include, for example, surgical tools, gauze, clips, and the like. Since these have predetermined morphological characteristics, the computer can recognize them in real time through image analysis during surgery.
  • Internally generated objects include, for example, bleeding from a body part. The computer can likewise recognize these in real time through image analysis during surgery.
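As an illustration of the kind of real-time recognition described in the two items above, the following is a minimal sketch that flags candidate bleeding regions in a video frame by simple color thresholding. The patent does not specify a recognition algorithm; the OpenCV approach, HSV ranges, and minimum-area filter here are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_bleeding_regions(frame_bgr, min_area=500):
    """Flag candidate bleeding regions in one surgical video frame.

    Simple HSV red-color thresholding; the ranges and the minimum
    area are illustrative assumptions, not values from the patent.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 60), (180, 255, 255))
    # Remove small speckles before extracting connected regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```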
  • The movements of the organs and omentum included in the body parts, and the causes of internally generated objects, all result from the movement of externally introduced objects.
  • a surgical image can be segmented based on the motion of each object.
  • the surgical image may be segmented based on the motion, i.e., action, of the externally introduced object.
  • the computer judges the type of each object recognized in the surgical image.
  • The computer can recognize the motion of each object, that is, the action, based on predefined unit operations and predefined series of operations.
  • the computer recognizes the type of each action, and also recognizes the cause of each action.
  • The computer can divide the surgical image based on the recognized actions and, through this stepwise division, can recognize everything from each detailed operation up to the type of the whole surgery.
  • the computer determines the type of predefined operation corresponding to the surgical image from the judgment of the action.
  • Once the type of surgery is determined, information about the entire surgical procedure can be obtained. If there are multiple surgical processes for the same type of surgery, one surgical process may be selected based on the doctor's choice, or based on the actions recognized up to a certain point in time.
  • The computer can recognize and predict the surgical stage based on the acquired surgical procedure. For example, if a particular step in a series of surgical procedures is recognized, the steps following it can be predicted, or candidates for possible next steps can be narrowed down. Therefore, the error rate of surgical image recognition caused by occlusion and the like can be greatly reduced. Further, when the surgical image deviates from the predictable surgical stage by more than a predetermined error range, it may be recognized that a surgical error situation has occurred.
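A minimal sketch of this prediction, under stated assumptions: given a known ordered procedure for one surgery type, a recognized step narrows down the candidate next steps, and a recognized step that deviates too far from the expected stage is flagged as a possible error situation. The procedure names, lookahead, and deviation threshold are hypothetical, not taken from the patent.

```python
# Hypothetical ordered procedure for one surgery type; the step
# names are illustrative placeholders.
GASTRECTOMY_STEPS = ["opening", "fat_removal", "vessel_ligation",
                     "gastrectomy", "organ_connection", "suturing"]

def candidate_next_steps(recognized_step, procedure=GASTRECTOMY_STEPS, lookahead=2):
    """Narrow down which steps can plausibly follow the recognized one."""
    i = procedure.index(recognized_step)
    return procedure[i + 1 : i + 1 + lookahead]

def looks_like_error(recognized_step, expected_step,
                     procedure=GASTRECTOMY_STEPS, max_deviation=2):
    """Flag a possible surgical error when the recognized step deviates
    from the predictable stage by more than an allowed range."""
    deviation = abs(procedure.index(recognized_step) - procedure.index(expected_step))
    return deviation > max_deviation

print(candidate_next_steps("fat_removal"))          # ['vessel_ligation', 'gastrectomy']
print(looks_like_error("suturing", "fat_removal"))  # True: deviation of 4 steps
```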
  • the computer can make a judgment on each action based on the recognition of each action. For example, a computer can recognize the necessity and effectiveness of each action.
  • the computer can make a judgment as to whether each action was necessary or unnecessary.
  • the computer can determine whether each action was performed efficiently, even if each action was required. This is used to provide an operative report, eliminate unnecessary operations in the surgical procedure, and streamline inefficient operations.
  • The surgical image is largely divided into components including body parts (organs and omentum), objects introduced from the outside, internally generated objects, actions, the type of surgery, and the necessity and efficiency of each action. That is, instead of recognizing the surgical image as a whole, the computer divides the surgical image into component units that cover the elements of the surgical image as fully as possible while minimizing mutual overlap, and by recognizing the image based on these divided component units, the surgical image can be recognized more specifically and more easily.
  • the computer may divide the surgical image hierarchically (or stepwise).
  • the computer divides the surgical image hierarchically (or stepwise) into one or more classification units, and recognizes the operations corresponding to each of the divided classification units hierarchically (or stepwise).
  • the computer may sequentially recognize the operations of the first classification unit, the second classification unit, the third classification unit, and the fourth classification unit included in the surgical image.
  • For example, the first classification unit may be a component unit, the second classification unit a subsegment unit, the third classification unit a segmentation unit, and the fourth classification unit a subsegmentation unit, but the classification units are not limited thereto.
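One possible representation of these classification units and of a hierarchically divided surgical image is sketched below. The unit names follow the terms above, with the fourth unit labeled by its role as the surgery type (see the gastric cancer example and FIG. 4 below); the data structure itself is an assumption, not something the patent specifies.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class ClassificationUnit(IntEnum):
    """The four hierarchical classification units, smallest first."""
    COMPONENT = 1      # first unit: capturing, cutting, moving, ...
    SUBSEGMENT = 2     # second unit: vessel resection, fat removal, ...
    SEGMENT = 3        # third unit
    SURGERY_TYPE = 4   # fourth unit: e.g. gastric cancer surgery

@dataclass
class SurgicalStep:
    """A node in the hierarchically divided surgical image."""
    code: str                  # pre-established action code (see FIG. 3)
    unit: ClassificationUnit
    start_sec: float           # position of the step within the video
    end_sec: float
    importance: float = 0.0    # filled in by the importance judgment
    children: list["SurgicalStep"] = field(default_factory=list)
```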
  • Referring to FIG. 3, an example of a method of hierarchically dividing and recognizing a surgical operation is shown.
  • FIG. 3 schematically shows a method of dividing a surgical operation into a first classification unit 210, a second classification unit 220, a third classification unit 230, and a fourth classification unit 240, and of recognizing the divided data.
  • each code shown in FIG. 3 may refer to a pre-established code that can identify actions included in each classification unit.
  • For example, the operation of the first classification unit may include capturing, cutting, moving, and the like.
  • The operation of the second classification unit may include blood vessel resection, fat removal, organ resection, suturing, and the like.
  • The operation of the fourth classification unit may include, for example, gastric cancer surgery.
  • For example, gastric cancer surgery can largely include laparotomy, gastrectomy, organ connection, and suturing, and each of these operations can be embodied more concretely as, for example, resection of part of the stomach and connection of parts of other organs; each of these can in turn be embodied by cutting blood vessels, removing obstacles such as fat, and so on; and these can be further specified by simpler operations such as moving, grabbing, and cutting.
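The gastric cancer breakdown above, rendered as a nested mapping purely for illustration; all codes and names are invented placeholders for the pre-established codes of FIG. 3.

```python
# Hypothetical nested breakdown of the gastric cancer example; each
# level corresponds to one classification unit, largest at the top.
GASTRIC_CANCER_HIERARCHY = {
    "G_gastric_cancer_surgery": {            # fourth unit: surgery type
        "G1_laparotomy": {},                 # third unit
        "G2_gastrectomy": {
            "G2a_vessel_cutting": {          # second unit
                "move": {}, "grab": {}, "cut": {},   # first unit
            },
            "G2b_fat_removal": {
                "move": {}, "cut": {},
            },
        },
        "G3_organ_connection": {},
        "G4_suturing": {},
    },
}
```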
  • This hierarchy can also be used in reverse: an operation can be divided into minimum detail units, and the computer can be trained to recognize the higher-level operations step by step using the divided results.
  • the surgical site is different for each patient, each disease differs in shape, and the operation patterns are different depending on the type of operation.
  • Using such a learning model, it is possible to provide a surgical motion recognition model that can be applied regardless of the patient's physical condition or the type of surgery, or, if necessary, a surgical motion recognition model tailored to the patient's condition or the type of surgery.
  • the computer can recognize an event that occurs in a surgical image.
  • the event includes a surgical error situation, such as bleeding.
  • the computer can recognize this through image processing of the surgical image.
  • the computer may divide the surgical image into one or more event groups including recognized events.
  • the divided event groups may be managed separately, included in a classification unit according to the disclosed embodiment, or may be utilized as an independent classification unit for analysis of a surgical operation.
  • the computer can determine the cause of the event based on the recognized event and the surgical operation before and after the event was recognized.
  • the computer may generate learning data for analyzing the cause of the event by storing the operations of the predetermined classification unit before and after the occurrence of the event together with information on the event.
  • the computer can perform learning using the generated learning data, and learn the correlation between the operation and the events of each classification unit.
  • the computer can determine the cause of the event occurrence and provide feedback to the user.
  • the computer may perform learning for optimization of surgical operations based on operation of a given classification unit.
  • the computer can learn an optimized sequence and method for performing the operation of each classification unit according to the physical condition of the patient and the type of surgery.
  • the computer may perform learning for optimization of the surgical operation based on the operation of the first classification unit.
  • The computer may obtain one or more pieces of reference surgery information.
  • The computer can perform learning based on the order of the surgical operations included in the one or more pieces of reference surgery information and determine an optimized order of surgical operations for each surgery according to the learning results.
  • Since the operation of the first classification unit is a minimum unit operation commonly applied in any surgery, when learning is performed based on the first classification unit, an optimized order of surgical operations can be obtained regardless of the type of surgery and the patient's physical condition. Likewise, an optimized learning model according to the type of surgery and the patient's physical condition can be obtained through fine-tuning of the learned model.
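A deliberately simple stand-in for the learning described above: derive an order of first-classification-unit operations from several reference surgeries by averaging each operation's relative position. The patent does not specify the learning model, so this position-averaging heuristic is purely illustrative.

```python
from collections import defaultdict

def optimized_operation_order(reference_surgeries):
    """Order operations by their average relative position across
    several reference surgery sequences (an illustrative heuristic)."""
    positions = defaultdict(list)
    for sequence in reference_surgeries:
        for rank, op in enumerate(sequence):
            positions[op].append(rank / max(len(sequence) - 1, 1))
    return sorted(positions, key=lambda op: sum(positions[op]) / len(positions[op]))

# Three hypothetical reference sequences of first-unit operations:
refs = [["move", "grab", "cut"], ["move", "cut", "grab"], ["move", "grab", "cut"]]
print(optimized_operation_order(refs))  # ['move', 'grab', 'cut']
```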
  • In step S130, the computer obtains information on the surgical stage corresponding to each of the one or more regions divided in step S120.
  • The computer then determines the importance of each surgical step (step S140) and controls playback of the surgical image based on the determined importance (step S150).
  • The computer determines the type of surgery and determines the importance of each surgical stage based on the determined type. For example, certain surgical steps may be important for some operations but less important for others. Accordingly, the computer can determine the type of surgery from the surgical image according to the above-described method, and determine the importance of each surgical step into which the surgical image is divided based on the determined type.
  • The computer determines, based on the importance of each surgical stage, whether to play the surgical image corresponding to that stage and at what summary level. For example, a surgical stage with relatively low importance may be skipped or heavily summarized during playback. Likewise, a relatively more important surgical step may not be summarized, or may be only lightly summarized.
  • the computer determines the segmentation level of each surgical step based on the determined importance for each surgical step.
  • The computer divides each surgical step according to the determined division level and plays the surgical image corresponding to each divided step, determining, based on the division level and importance of each divided step, whether to play the corresponding surgical image and at what summary level.
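A minimal sketch of this control loop (steps S140 to S150), reusing the SurgicalStep structure sketched earlier: walk the hierarchy top-down, subdivide sufficiently important steps, and emit a play/skip decision with a summary level for each remaining step. The thresholds and the summary-level formula are illustrative assumptions.

```python
def plan_playback(step, subdivide_threshold=0.5, skip_threshold=0.2):
    """Recursively decide whether to play each surgical step, at what
    summary level, and whether to subdivide it further.

    Returns (step, play, summary_level) tuples; a higher summary_level
    means stronger summarization. Thresholds are illustrative.
    """
    plan = []
    if step.children and step.importance >= subdivide_threshold:
        # Important enough: descend and decide per sub-step instead.
        for child in step.children:
            plan.extend(plan_playback(child, subdivide_threshold, skip_threshold))
    elif step.importance < skip_threshold:
        plan.append((step, False, None))   # skip playback entirely
    else:
        # Summarize more strongly the less important the step is.
        plan.append((step, True, round((1.0 - step.importance) * 10)))
    return plan
```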
  • FIG. 4 is a view for explaining a method of determining a division level of each surgical stage and reproducing a surgical image according to an embodiment.
  • a tree 300 corresponding to an example of hierarchically dividing a surgical image is shown.
  • Although the tree 300 shown in FIG. 4 is depicted as a binary tree, the data structure for hierarchically dividing a surgical image is not limited to a binary tree and may have a higher-order tree structure.
  • In FIG. 4, for convenience of explanation, it is assumed that information on the surgical image hierarchically divided as in FIG. 3 is stored in the tree form shown in FIG. 4.
  • the root node 310 of the tree 300 may correspond to a fourth classification unit, i.e., the type of surgery.
  • The child nodes 320 and 330 of the root node 310 correspond to the third classification unit, their child nodes correspond to the second classification unit, and the child nodes of those correspond to the first classification unit, but the structure is not limited thereto.
  • the computer can determine the type of surgery at the root node 310 and determine the importance of the surgical stage corresponding to each of the child nodes 320 and 330 below.
  • node 320 may correspond to a fat removal operation
  • node 330 may correspond to a gastrostomy operation.
  • The computer may play a summarized surgical image, or skip playback entirely, without further dividing the surgical stage corresponding to the node 320, which corresponds to the relatively less important fat removal operation.
  • the computer may further divide the surgical stage corresponding to node 330 and determine the importance of the surgical stage corresponding to each of the segmented nodes 340 and 350.
  • For the node 350, which corresponds to a relatively less important surgical stage, further partitioning may stop, and the surgical operation corresponding to the node 350 may be summarized and played.
  • The summary level for the node 350 may be lower than the summary level for the node 320. That is, the surgical image corresponding to the node 350 may be displayed longer than the surgical image corresponding to the node 320.
  • the node 340 may be further divided in correspondence to a relatively important surgical stage, and the importance of the surgical stage corresponding to each of the nodes 360 and 370 may be determined.
  • For example, if the node 360 corresponds to a more important surgical stage than the node 370, the computer may display the surgical image corresponding to the node 360 longer (or less summarized) than the surgical image corresponding to the node 370.
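Continuing the sketches above, a FIG. 4-style tree can be built and run through plan_playback to reproduce this behavior; the importance values are invented for illustration.

```python
# Root 310 is the surgery type; 320 is the less important fat removal,
# 330 the more important gastrectomy, mirroring the FIG. 4 walkthrough.
n320 = SurgicalStep("320", ClassificationUnit.SEGMENT, 0, 600, importance=0.3)
n350 = SurgicalStep("350", ClassificationUnit.SUBSEGMENT, 600, 900, importance=0.4)
n360 = SurgicalStep("360", ClassificationUnit.COMPONENT, 900, 1500, importance=0.9)
n370 = SurgicalStep("370", ClassificationUnit.COMPONENT, 1500, 1800, importance=0.6)
n340 = SurgicalStep("340", ClassificationUnit.SUBSEGMENT, 900, 1800, importance=0.8,
                    children=[n360, n370])
n330 = SurgicalStep("330", ClassificationUnit.SEGMENT, 600, 1800, importance=0.7,
                    children=[n350, n340])
root = SurgicalStep("310", ClassificationUnit.SURGERY_TYPE, 0, 1800, importance=1.0,
                    children=[n320, n330])

for step, play, level in plan_playback(root):
    print(step.code, "play" if play else "skip", level)
# 320 play 7, 350 play 6, 360 play 1, 370 play 4:
# 350 is summarized less than 320, and 360 is shown longest.
```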
  • FIG. 5 is a view for explaining a method of controlling reproduction of a surgical image including an event according to an embodiment.
  • an event may occur in the surgical image regardless of the importance of each surgical step.
  • an event may include bleeding or a surgical error situation.
  • For example, the node 410 included in the tree 400 corresponds to a fat removal operation of relatively low importance, but an event (e.g., bleeding) may occur in the surgical stage corresponding to the node 420, which is one of the child nodes of the node 410.
  • In this case, the computer may summarize and display the surgical image corresponding to the node 410, but may not summarize (or may only lightly summarize) the surgical image corresponding to the node 420, the surgical stage in which the event occurred.
  • As a result, the surgical image corresponding to the node 420 may occupy a substantial portion of the summarized surgical image corresponding to the node 410.
  • the computer can determine the importance of each recognized event.
  • The computer can control playback of the surgical image corresponding to the event based on the determined importance. For example, the computer may determine whether to play the surgical image corresponding to the event and at what summary level, and may play it according to the determined playback decision and summary level.
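A sketch of this event override, continuing the running example: steps in which an important event was recognized are forced to play with little or no summarization, regardless of the step's own importance. The mapping from event importance to summary levels is an assumption.

```python
def apply_event_override(plan, events, event_importance_threshold=0.5):
    """Force steps containing an important recognized event to play
    with little or no summarization.

    `events` maps a step code to the importance of the event recognized
    in that step; the threshold and levels are illustrative.
    """
    adjusted = []
    for step, play, level in plan:
        ev = events.get(step.code)
        if ev is not None and ev >= event_importance_threshold:
            adjusted.append((step, True, 0))             # play unsummarized
        elif ev is not None:
            adjusted.append((step, True, min(level or 10, 5)))
        else:
            adjusted.append((step, play, level))
    return adjusted

# E.g., bleeding recognized during the fat removal stage (node 320):
plan = apply_event_override(plan_playback(root), {"320": 0.9})
```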
  • the surgical image regeneration control method according to the disclosed embodiment can be applied to a situation in which a surgical image is divided into one or more stages, and each stage is grouped into one or more groups.
  • The computer determines the importance of each group, and surgical images corresponding to a group of relatively low importance may be omitted or heavily summarized. Similarly, surgical images corresponding to groups of relatively high importance may not be summarized or may be only lightly summarized.
  • The computer may obtain information about the time available to play the surgical image and determine the summary level for each part of the surgical image such that the whole surgical image can be played within the acquired time. For example, if the surgical image needs to be played in a relatively short time, parts of it may be omitted or further summarized depending on their importance.
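Continuing the running example, a sketch of fitting the playback plan to a time budget: each played step is allotted time proportional to its importance and duration, capped at its full length, with a small floor so nothing kept vanishes entirely. The proportional rule and minimum keep ratio are illustrative assumptions.

```python
def fit_to_time_budget(plan, budget_sec, min_keep_ratio=0.05):
    """Allot playback time per step so the whole plan fits the budget."""
    kept = [step for step, play, _ in plan if play]
    weights = [s.importance * (s.end_sec - s.start_sec) for s in kept]
    total = sum(weights) or 1.0
    allocation = {}
    for s, w in zip(kept, weights):
        full = s.end_sec - s.start_sec
        share = budget_sec * w / total
        allocation[s.code] = max(min(share, full), full * min_keep_ratio)
    return allocation

# Play the whole example surgery within 5 minutes:
print(fit_to_time_budget(plan_playback(root), budget_sec=300))
```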
  • The computer may obtain information from a user about the surgical steps the user wishes to view. In this case, the computer may not summarize the surgical steps corresponding to the acquired information, or may play them with less summarization.
  • the computer can search the data structure in real time, determine the importance for each step, and determine whether to play and summary levels.
  • For example, the computer may determine that a particular surgical step is less important and decide to omit it or summarize it heavily.
  • Conversely, the computer may further divide a surgical stage to display it in more detail.
  • FIG. 6 is a configuration diagram of an apparatus 100 according to an embodiment.
  • The processor 102 may include one or more cores (not shown), a graphics processing unit (not shown), and/or a connection path (e.g., a bus) for transmitting and receiving signals to and from other components.
  • The processor 102 according to one embodiment executes one or more instructions stored in the memory 104 to perform the learning data management method described with reference to FIGS. 1 to 8.
  • By executing one or more instructions stored in the memory, the processor 102 may acquire a surgical image, acquire information obtained by dividing the surgical image into one or more regions, acquire information on the surgical stage corresponding to each of the divided regions, determine the importance of each surgical stage, and control playback of the surgical image based on the determined importance.
  • The processor 102 may further include a random access memory (RAM, not shown) and a read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed therein.
  • the processor 102 may be implemented as a system-on-chip (SoC) including at least one of a graphics processing unit, a RAM, and a ROM.
  • the memory 104 may store programs (one or more instructions) for processing and control of the processor 102. Programs stored in the memory 104 may be divided into a plurality of modules according to functions.
  • the learning data management method can be implemented as a program (or an application) to be executed in combination with a computer, which is hardware, and can be stored in a medium.
  • The above-described program may include code encoded in a computer language such as C, C++, JAVA, or machine language that the processor (CPU) of the computer can read through the device interface of the computer, so that the computer can read the program and execute the methods implemented as the program.
  • The code may include functional code related to the functions defining what is necessary to execute the above methods, and control code related to the execution procedure necessary for the processor of the computer to execute those functions in a predetermined order.
  • The code may further include memory-reference-related code indicating at which location (address) of the computer's internal or external memory the additional information or media needed for the processor to execute the functions should be referenced.
  • In addition, when the processor of the computer needs to communicate with a remote computer or server to execute the functions, the code may further include communication-related code indicating how to communicate with the remote computer or server using the communication module of the computer, and what information or media should be transmitted or received during communication.
  • The storage medium is not a medium that stores data for a short time, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device.
  • examples of the medium to be stored include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage, and the like, but are not limited thereto.
  • the program may be stored in various recording media on various servers to which the computer can access, or on various recording media on the user's computer.
  • the medium may be distributed to a network-connected computer system so that computer-readable codes may be stored in a distributed manner.
  • the steps of a method or algorithm described in connection with the embodiments of the present invention may be embodied directly in hardware, in software modules executed in hardware, or in a combination of both.
  • The software module may reside in a random access memory (RAM), a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium known in the art to which the invention pertains.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Robotics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • General Business, Economics & Management (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Gynecology & Obstetrics (AREA)
  • Manipulator (AREA)
  • Image Generation (AREA)

Abstract

Disclosed is a method for surgical image playback control, comprising the steps of: acquiring, by a computer, a surgical image; obtaining information in which the surgical image is divided into one or more regions; obtaining information on surgical stages corresponding to the divided one or more regions, respectively; determining a level of importance for each of the surgical stages; and controlling playback of the surgical image on the basis of the determined importance level.

Description

Surgical image playback control method, apparatus, and program
The present invention relates to a method, apparatus, and program for controlling surgical image playback.
In the surgical process, the development of technologies capable of providing information to assist a doctor's surgery is required. In order to provide information to assist surgery, it is necessary to be able to recognize surgical actions.
Therefore, it is required to develop a technique enabling a computer to recognize surgical actions from a surgical image, play the surgical image, and provide auxiliary information corresponding to the surgical image.
In recent years, deep learning has been widely used in the analysis of medical images. Deep learning is defined as a set of machine learning algorithms that attempt high-level abstraction (summarizing key content or functions from large amounts of data or complex data) through a combination of several nonlinear transformation techniques. Broadly speaking, deep learning can be viewed as a field of machine learning that teaches computers how humans think.
The object of the present invention is to provide a method, apparatus, and program for controlling surgical image playback.
The problems to be solved by the present invention are not limited to those mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the following description.
According to one aspect of the present invention for solving the above problems, a surgical image playback control method comprises: obtaining, by a computer, a surgical image; acquiring information obtained by dividing the surgical image into one or more regions; acquiring information on the surgical stage corresponding to each of the divided regions; determining the importance of each surgical stage; and controlling playback of the surgical image based on the determined importance.
Acquiring the information on the surgical steps may include determining the type of surgery, and determining the importance may include determining the importance of each surgical step based on the type of surgery.
Controlling playback of the surgical image may include determining, based on the importance of each surgical step, whether to play the surgical image corresponding to that step and at what summary level.
Controlling playback of the surgical image may also include: determining a division level for each surgical step based on its importance; dividing each surgical step according to the determined division level; and playing the surgical image corresponding to each divided step while determining, based on the division level of each divided step, whether to play the corresponding surgical image and at what summary level.
Acquiring the divided information may include acquiring information obtained by hierarchically dividing the surgical image into one or more classification units, and acquiring the information on the surgical steps may include searching the hierarchically divided information starting from an upper layer, and acquiring information on the surgical stage corresponding to each piece of hierarchically divided information.
Determining the importance may include determining the importance of the surgical stage corresponding to each piece of hierarchically divided information, and controlling playback of the surgical image may include determining, based on the importance of each surgical step, whether to hierarchically divide that step further, and determining, based on the importance of each hierarchically divided step, whether to play the corresponding surgical image and at what summary level.
The method may further include recognizing at least one event included in the surgical image and playing the surgical image corresponding to the event.
Playing the surgical image corresponding to the event may include determining the importance of the recognized event and controlling playback of the surgical image corresponding to the event based on the importance of the event.
According to another aspect of the present invention for solving the above problems, a learning data management apparatus includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory, wherein by executing the one or more instructions the processor performs the steps of: obtaining a surgical image; acquiring information obtained by dividing the surgical image into one or more regions; acquiring information on the surgical stage corresponding to each of the divided regions; determining the importance of each surgical stage; and controlling playback of the surgical image based on the determined importance.
According to another aspect of the present invention for solving the above problems, there is provided a computer program stored in a computer-readable recording medium, which, in combination with a computer as hardware, performs the surgical image playback control method according to the disclosed embodiment.
Other specific details of the invention are included in the detailed description and drawings.
According to the disclosed embodiment, a surgical image can be reviewed easily by omitting or summarizing its parts according to importance during playback.
The effects of the present invention are not limited to the effects mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the following description.
FIG. 1 is a simplified schematic diagram of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
FIG. 2 is a flowchart illustrating a method of controlling playback of a surgical image according to an embodiment.
FIG. 3 is a diagram showing an example of a method of hierarchically dividing and recognizing a surgical operation.
FIG. 4 is a diagram for explaining a method of determining the division level of each surgical stage and playing a surgical image according to an embodiment.
FIG. 5 is a diagram for explaining a method of controlling playback of a surgical image including an event according to an embodiment.
FIG. 6 is a configuration diagram of an apparatus according to an embodiment.
The advantages and features of the present invention and the methods of achieving them will become apparent with reference to the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; the embodiments are provided only so that the disclosure of the present invention is complete and to fully convey the scope of the invention to those of ordinary skill in the art to which the present invention pertains, and the present invention is defined only by the scope of the claims.
The terminology used herein is for the purpose of describing embodiments and is not intended to limit the present invention. In this specification, the singular also includes the plural unless specifically stated otherwise. As used in the specification, "comprises" and/or "comprising" do not exclude the presence or addition of one or more elements other than the stated elements. Like reference numerals refer to like elements throughout the specification, and "and/or" includes each and every combination of one or more of the mentioned elements. Although "first", "second", and the like are used to describe various elements, these elements are of course not limited by these terms. These terms are used only to distinguish one element from another. Therefore, a first element mentioned below may of course be a second element within the technical idea of the present invention.
Unless otherwise defined, all terms (including technical and scientific terms) used herein may be used with meanings commonly understood by those of ordinary skill in the art to which the present invention pertains. In addition, terms defined in commonly used dictionaries are not interpreted ideally or excessively unless explicitly and specifically defined.
As used herein, the term "part" or "module" refers to a software or hardware component such as an FPGA or ASIC, and a "part" or "module" performs certain roles. However, "part" or "module" is not limited to software or hardware. A "part" or "module" may be configured to reside on an addressable storage medium and may be configured to run on one or more processors. Thus, by way of example, a "part" or "module" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided within components and "parts" or "modules" may be combined into a smaller number of components and "parts" or "modules" or further separated into additional components and "parts" or "modules".
이하, 첨부된 도면을 참조하여 본 발명의 실시예를 상세하게 설명한다. Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
도 1은 개시된 실시 예에 따라 로봇수술을 수행할 수 있는 시스템을 간략하게 도식화한 도면이다.1 is a simplified schematic diagram of a system capable of performing robotic surgery in accordance with the disclosed embodiments.
도 1에 따르면, 로봇수술 시스템은 의료영상 촬영장비(10), 서버(20) 및 수술실에 구비된 제어부(30), 디스플레이(32) 및 수술로봇(34)을 포함한다. 실시 예에 따라서, 의료영상 촬영장비(10)는 개시된 실시 예에 따른 로봇수술 시스템에서 생략될 수 있다.1, the robotic surgery system includes a medical imaging apparatus 10, a server 20, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34. Depending on the embodiment, the medical imaging equipment 10 may be omitted from the robotic surgery system according to the disclosed embodiment.
In one embodiment, the surgical robot 34 includes an imaging device 36 and a surgical tool 38.
In one embodiment, robotic surgery is performed by a user controlling the surgical robot 34 using the control unit 30. In one embodiment, robotic surgery may also be performed automatically by the control unit 30 without user control.
The server 20 is a computing device including at least one processor and a communication unit.
The control unit 30 includes a computing device including at least one processor and a communication unit. In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
The imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera and is used to photograph an object, that is, the surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled to a surgical arm of the surgical robot 34.
In one embodiment, the image captured by the imaging device 36 is displayed on the display 32.
In one embodiment, the surgical robot 34 includes one or more surgical tools 38 capable of performing operations such as cutting, clipping, fixing, and grasping at the surgical site. The surgical tool 38 is used in combination with a surgical arm of the surgical robot 34.
The control unit 30 receives information necessary for the surgery from the server 20, or generates information necessary for the surgery and provides it to the user. For example, the control unit 30 displays the generated or received information necessary for the surgery on the display 32.
For example, the user performs the robotic surgery by manipulating the control unit 30 while viewing the display 32, thereby controlling the movement of the surgical robot 34.
The server 20 generates information necessary for the robotic surgery using medical image data of the object captured in advance by the medical imaging equipment 10, and provides the generated information to the control unit 30.
The control unit 30 provides the information received from the server 20 to the user by displaying it on the display 32, or controls the surgical robot 34 using the information received from the server 20.
In one embodiment, the means that can be used as the medical imaging equipment 10 is not limited; for example, various other medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
In the disclosed embodiments, the surgical image obtained by the imaging device 36 is transmitted to the control unit 30.
In one embodiment, the control unit 30 may segment the surgical image obtained during surgery in real time.
In one embodiment, the control unit 30 transmits the surgical image to the server 20 during surgery or after the surgery is completed.
The server 20 can segment and analyze the surgical image.
The server 20 trains and stores at least one model for segmenting and analyzing the surgical image. In addition, the server 20 trains and stores at least one model for generating an optimized surgical process.
In addition, the server 20 or a client (including, for example, the control unit 30) can display the obtained surgical image. However, since surgical images are generally very long, it is practically difficult for a researcher or doctor to review an entire surgical image.
Hereinafter, a method by which the server 20 or a client displays a surgical image while omitting or summarizing each part according to its importance is described in detail.
FIG. 2 is a flowchart illustrating a method of controlling playback of a surgical image according to an embodiment.
Referring to FIG. 2, steps that may be performed by the server 20 or the client shown in FIG. 1 are shown in time series. Hereinafter, for convenience of explanation, it is described that a computer performs each of the steps shown in FIG. 2; however, all or some of the steps shown in FIG. 2 may be performed by the server 20 or by the client, respectively.
In step S110, the computer acquires a surgical image.
In this specification, the surgical image may be an image of a surgery actually performed by the surgical robot 34, or a simulation image generated based on an image obtained from the medical imaging equipment 10. The surgical image may also be an image according to an optimized surgical method, generated based on an image obtained from the medical imaging equipment 10.
In addition, the surgical image in this specification may mean an image in which the surgical procedure was actually captured, or a 3D modeling image generated based on a medical image obtained from the medical imaging equipment 10. Likewise, the type of surgical image referred to herein is not limited, and may be understood to include any type of image at least part of which shows a surgical procedure.
In one embodiment, the computer obtains information in which the surgical image acquired in step S110 is segmented into one or more regions. In one embodiment, the surgical image is segmented automatically by the computer.
In one embodiment, the surgical image may be segmented according to various criteria. As one example, the surgical image may be segmented based on the types of objects included in the image. A segmentation method based on object type requires a step in which the computer recognizes each object.
Objects recognized in a surgical image broadly include the human body, objects introduced from outside, and objects generated internally. The human body includes body parts that are captured by medical imaging (for example, CT) performed prior to the surgery and body parts that are not captured.
For example, body parts captured by medical imaging include organs, blood vessels, bones, and tendons, and such body parts can be recognized based on a 3D modeling image generated from the medical image.
Specifically, the position, size, and shape of each body part are recognized in advance by a 3D analysis method based on the medical image. The computer defines an algorithm that can determine, in real time, the position of each body part corresponding to the surgical image, and on this basis can obtain information on the position, size, and shape of each body part included in the surgical image without performing separate image recognition. Body parts not captured by medical imaging include the omentum and the like; since these do not appear in the medical image, they must be recognized in real time during surgery. For example, the computer can determine the position and size of the omentum through image recognition and, when there are blood vessels inside the omentum, can also predict the positions of those vessels.
Objects introduced from outside include, for example, surgical tools, gauze, and clips. Since these have predetermined morphological characteristics, the computer can recognize them in real time through image analysis during surgery.
Objects generated internally include, for example, bleeding occurring at a body part. The computer can recognize these in real time through image analysis during surgery.
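By way of a minimal illustrative sketch only (the category and label names below are assumptions introduced for this sketch and do not limit the disclosed embodiments), the three classes of recognized objects could be represented as follows:

from enum import Enum

class ObjectCategory(Enum):
    BODY_PART = "body part"        # organs, vessels, bones, tendons, omentum
    EXTERNAL = "external object"   # surgical tools, gauze, clips
    INTERNAL = "internal object"   # e.g. bleeding arising at a body part

# Hypothetical mapping from recognized labels to categories, for illustration.
CATEGORY_OF = {
    "stomach": ObjectCategory.BODY_PART,
    "omentum": ObjectCategory.BODY_PART,
    "grasper": ObjectCategory.EXTERNAL,
    "gauze": ObjectCategory.EXTERNAL,
    "bleeding": ObjectCategory.INTERNAL,
}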
The movements of the organs and omentum included in the body parts, and the causes of internally generated objects, all result from the movements of objects introduced from outside.
Therefore, in addition to recognizing each object, the surgical image can be segmented based on the movement of each object. In one embodiment, the surgical image may be segmented based on the movements, that is, the actions, of externally introduced objects.
The computer determines the type of each object recognized in the surgical image and, according to the type of each object, recognizes the movement of the object, that is, its action, based on predefined specific motions, sequences of motions, and the situations or results arising from those motions.
The computer recognizes the type of each action and can further recognize the cause of each action. The computer can segment the surgical image based on the recognized actions and, through stepwise segmentation, can recognize everything from each detailed surgical motion up to the type of the overall surgery.
Furthermore, from its judgment of the actions, the computer determines the predefined type of surgery corresponding to the surgical image. Once the type of surgery is determined, information on the entire surgical process can be obtained. When multiple surgical processes exist for the same type of surgery, one surgical process can be selected according to the doctor's choice or based on the actions recognized up to a certain point in time.
The computer can recognize and predict surgical steps based on the obtained surgical process. For example, when a specific step within a series of surgical processes is recognized, the subsequent steps can be predicted, or the candidates for possible next steps can be narrowed down. Accordingly, the error rate of surgical image recognition caused by the omentum and the like can be greatly reduced. In addition, when the surgical image deviates from the predictable surgical steps by more than a predetermined error range, it may be recognized that a surgical error situation has occurred.
The computer can also make a judgment about each action based on its recognition of that action. For example, the computer can assess the necessity and the effectiveness of each action.
Specifically, the computer can judge whether each action was necessary or unnecessary. Even when an action was necessary, the computer can judge whether it was performed efficiently. This is used to provide surgical performance reports, to eliminate unnecessary motions from the surgical process, and to make inefficient motions more efficient.
As described above, a surgical image can be broadly segmented into components including body parts (organs and the omentum), externally introduced objects, internally generated objects, actions, the type of surgery, and the necessity and effectiveness of each action. That is, instead of recognizing the surgical image as a whole, the computer segments the surgical image into component units that cover as many of its elements as possible while minimizing mutual overlap, and recognizes the surgical image based on the segmented component units; in this way, the surgical image can be recognized more specifically and more easily.
In one embodiment, the computer may segment the surgical image hierarchically (or stepwise).
In one embodiment, the computer segments the surgical image hierarchically (or stepwise) into one or more classification units, and recognizes the operation corresponding to each segmented classification unit hierarchically (or stepwise).
For example, the computer may sequentially recognize the operations of a first classification unit, a second classification unit, a third classification unit, and a fourth classification unit included in the surgical image.
For example, the first classification unit may be a component unit, the second classification unit a subsegment unit, the third classification unit a segment unit, and the fourth classification unit an operation unit, but the classification units are not limited thereto.
Referring to FIG. 3, an example of a method of hierarchically segmenting and recognizing surgical operations is shown.
Referring to FIG. 3, a method of segmenting a surgical operation into the first classification unit 210 and recognizing it, and from this stepwise segmenting and recognizing the second classification unit 220, the third classification unit 230, and the fourth classification unit 240, is conceptually illustrated.
In one embodiment, each code shown in FIG. 3 may mean a predetermined code capable of identifying the operations included in each classification unit.
As non-limiting examples, the operations of the first classification unit include grasping, cutting, and moving; the operations of the second classification unit include cutting blood vessels and removing fat; the operations of the third classification unit include opening, organ resection, organ connection, and suturing; and the operations of the fourth classification unit may include gastric cancer surgery.
That is, taking gastric cancer surgery as an example, the operations of a gastric cancer surgery can broadly include opening, gastrectomy, organ connection, and suturing; each of these operations can more specifically include cutting blood vessels, cutting each part of the stomach, and connecting each part of the stomach to parts of other organs; each of these can in turn be specified as cutting blood vessels, removing obstacles such as fat, and so on; and these can be broken down further into simple motions such as moving, grasping, and cutting.
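As an illustrative sketch of this decomposition (the nesting and names below are assumptions for illustration only and do not limit the classification units), the hierarchy could be stored as a nested structure:

# Hypothetical decomposition of a gastric cancer surgery into the four
# classification units: operation > segment > subsegment > component.
SURGERY_HIERARCHY = {
    "gastric cancer surgery": {                        # fourth unit (operation)
        "opening": {},                                 # third unit (segments)
        "gastrectomy": {
            "cut blood vessel": {                      # second unit (subsegments)
                "move": {}, "grasp": {}, "cut": {},    # first unit (components)
            },
            "remove fat": {"move": {}, "cut": {}},
        },
        "organ connection": {},
        "suturing": {},
    }
}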
According to the disclosed embodiments, this hierarchy can be used in reverse: the computer can be trained to segment surgical operations into the smallest units of detail and recognize them, and then to recognize higher-level operations stepwise using the segmented recognition results.
Without such a stepwise approach, it can be relatively difficult to accurately recognize higher-level surgical operations by image processing of the surgical image alone, because the surgical site differs in shape from patient to patient and from disease to disease, and the pattern of surgical operations differs depending on the type of surgery.
According to the disclosed embodiments, a learning model can be provided that starts by recognizing detailed surgical motions (for example, cutting and grasping) that are relatively less affected by the patient's physical condition or the type of surgery, recognizes through machine learning the higher-level surgical operations that a series of such detailed motions signifies, and stepwise recognizes ever larger units of surgical operations, up to the type of surgery itself.
Using the learning model according to the disclosed embodiments, a surgical operation recognition model applicable anywhere, regardless of the patient's physical condition or the type of surgery, can be provided; if necessary, fine tuning can be used to provide a surgical operation recognition model specialized for a particular patient's physical condition or type of surgery.
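A conceptual sketch of this bottom-up recognition follows; the pattern table and function below are assumptions introduced for illustration, standing in for the trained models of the disclosed embodiments:

# Hypothetical table: an ordered pair of component-level actions that is
# taken to signify a subsegment-level operation.
SUBSEGMENT_PATTERNS = {
    ("grasp", "cut"): "cut blood vessel",
    ("move", "cut"): "remove fat",
}

def aggregate(components, patterns, window=2):
    """Slide over recognized component-level actions and emit the
    higher-level operations that matching subsequences signify."""
    out, i = [], 0
    while i < len(components):
        key = tuple(components[i:i + window])
        if key in patterns:
            out.append(patterns[key])
            i += window
        else:
            i += 1
    return out

# aggregate(["grasp", "cut", "move", "cut"], SUBSEGMENT_PATTERNS)
# -> ["cut blood vessel", "remove fat"]

Applying the same aggregation again, this time over subsegment-level labels, would yield segment-level operations, and so on up the hierarchy.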
In one embodiment, the computer can recognize events occurring in the surgical image. For example, an event includes a surgical error situation such as bleeding. When an event occurs, the computer can recognize it through image processing of the surgical image.
When an event is recognized, the computer can segment the surgical image into one or more event groups containing the recognized event. The segmented event groups may be managed separately, included in a classification unit according to the disclosed embodiments, or used as an independent classification unit in the analysis of surgical operations.
In one embodiment, the computer can determine the cause of an event based on the recognized event and the surgical operations before and after the point at which the event was recognized.
For example, when an event occurs, the computer can generate learning data for analyzing the cause of the event by storing the operations of predetermined classification units before and after the time of the event, together with information about the event.
The computer can perform learning using the generated learning data, and learn the correlations between the surgical operations of each classification unit and events.
Based on the learning results, the computer can determine the cause of an event when it occurs and provide feedback to the user.
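A minimal sketch of assembling such learning data follows; the window size and the pairing of timestamps with operations are assumptions for illustration:

def make_event_examples(timeline, events, window=5):
    """timeline: list of (timestamp, operation) pairs in temporal order.
    events: list of (timestamp, event_label) pairs, e.g. bleeding.
    Returns (context_operations, event_label) training pairs built from the
    operations observed within `window` entries before and after each event."""
    examples = []
    if not timeline:
        return examples
    times = [t for t, _ in timeline]
    for t_event, label in events:
        # index of the timeline entry closest to the event time
        idx = min(range(len(times)), key=lambda i: abs(times[i] - t_event))
        lo, hi = max(0, idx - window), min(len(timeline), idx + window + 1)
        examples.append(([op for _, op in timeline[lo:hi]], label))
    return examples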
In one embodiment, the computer can perform learning for optimizing surgical operations based on the operations of a predetermined classification unit.
For example, the computer can learn an optimized order and method of performing the operations of each classification unit for each patient's physical condition and each type of surgery.
In one embodiment, the computer can perform learning for optimizing surgical operations based on the operations of the first classification unit. For example, the computer can obtain one or more pieces of reference surgery information, perform learning based on the order of the surgical operations included in the reference surgery information, and determine an optimized order of surgical operations for each surgery according to the learning results.
Since the operations of the first classification unit are the smallest-unit operations common to any surgery, performing learning based on the first classification unit can yield a learning model capable of determining an optimized order of surgical operations regardless of the type of surgery and the patient's physical condition. Likewise, it is also possible to obtain a learning model optimized for a particular type of surgery and a particular patient's physical condition through fine tuning of the learned model.
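One simple stand-in for such learning is sketched below, under the assumption that an optimized order can be approximated by the consensus of the reference surgeries (the scoring rule is illustrative only and is not the learning method of the disclosed embodiments):

from collections import defaultdict

def consensus_order(reference_sequences):
    """reference_sequences: one list of component-level operation labels per
    reference surgery. Returns the operations sorted by their average
    relative position, a crude proxy for a learned, optimized ordering."""
    positions = defaultdict(list)
    for seq in reference_sequences:
        for i, op in enumerate(seq):
            positions[op].append(i / max(1, len(seq) - 1))
    return sorted(positions, key=lambda op: sum(positions[op]) / len(positions[op]))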
In step S130, the computer obtains information on the surgical step corresponding to each of the one or more regions segmented in step S120.
In one embodiment, the computer determines the importance of each surgical step (step S140), and controls playback of the surgical image based on the determined importance (step S150).
In one embodiment, the computer determines the type of surgery and determines the importance of each surgical step based on the determined type. For example, a particular surgical step may be important in one surgery but of little importance in another. Accordingly, the computer can determine the type of surgery from the surgical image according to the method described above and, based on the determined type, determine the importance of each surgical step into which the surgical image has been segmented.
The computer also determines, based on the importance of each surgical step, whether to play back the surgical image corresponding to that step and its level of summarization. For example, a surgical step of relatively low importance may be skipped or played back heavily summarized. Likewise, a surgical step of relatively high importance may be played back unsummarized or only lightly summarized.
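By way of illustration only, such a decision could be expressed as a mapping from importance to a playback decision; the breakpoints below are assumptions chosen for this sketch:

def summary_level(importance):
    """Map a step's importance in [0, 1] to a playback decision:
    None      -> omit the step entirely,
    0 < r < 1 -> keep roughly the fraction r of the step's footage,
    1.0       -> play the step without summarization."""
    if importance < 0.2:
        return None
    if importance < 0.8:
        return 0.25 + 0.75 * (importance - 0.2) / 0.6   # ramps from 0.25 to 1.0
    return 1.0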
In one embodiment, the computer determines the segmentation level of each surgical step based on the importance determined for that step. The computer segments each surgical step according to the determined segmentation level and plays back the surgical image corresponding to each segmented surgical step, determining whether to play back the surgical image corresponding to each segmented surgical step and its summary level based on the segmentation level of each segmented surgical step.
FIG. 4 is a diagram for explaining a method of determining the segmentation level of each surgical step and playing back a surgical image according to an embodiment.
Referring to FIG. 4, a tree 300 corresponding to an example of hierarchically segmenting a surgical image is shown. Although the tree 300 shown in FIG. 4 is illustrated as a binary tree, the data structure for hierarchically segmenting a surgical image is not limited to a binary tree, and may have a higher-order tree structure or a structure other than a tree.
In FIG. 4, for convenience of explanation, it is assumed that information on a surgical image hierarchically segmented as shown in FIG. 3 is stored in the form of a tree as shown in FIG. 4.
In one embodiment, the root node 310 of the tree 300 may correspond to the fourth classification unit, that is, the type of surgery.
Further, the child nodes 320 and 330 of the root node 310 correspond to the third classification unit, their child nodes to the second classification unit, and those nodes' children to the first classification unit, but the structure is not limited thereto.
In one embodiment, the computer can determine the type of surgery at the root node 310 and then determine the importance of the surgical step corresponding to each of the child nodes 320 and 330. For example, node 320 may correspond to a fat removal operation, and node 330 may correspond to a gastrectomy operation.
In one embodiment, for the surgical step corresponding to node 320, which corresponds to the relatively less important fat removal operation, the computer may play back a summarized surgical image, or skip playback, without segmenting it further.
The computer may also further segment the surgical step corresponding to node 330 and determine the importance of the surgical steps corresponding to each of the segmented nodes 340 and 350.
In one embodiment, further segmentation stops at node 350, which corresponds to a surgical step of relatively low importance, and the surgical operation corresponding to node 350 may be summarized and played back. The summary level for node 350 may be lower than the summary level for node 320; that is, the surgical image corresponding to node 320 may be displayed in a more heavily summarized form than the surgical image corresponding to node 350.
Node 340, corresponding to a relatively important surgical step, is further segmented, and the importance of the surgical steps corresponding to each of nodes 360 and 370 can be determined.
If node 360 corresponds to a more important surgical step than node 370, the computer may display the surgical image corresponding to node 360 for longer (or with less summarization) than the surgical image corresponding to node 370.
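The traversal described with reference to FIG. 4 can be sketched as the following recursion; the thresholds, field names, and the use of importance as a summary fraction are assumptions introduced for illustration only:

from dataclasses import dataclass, field
from typing import List

@dataclass
class StepNode:
    name: str
    importance: float                  # assumed normalized to [0, 1]
    clip: str                          # handle to this step's footage
    children: List["StepNode"] = field(default_factory=list)

SKIP_BELOW, RECURSE_ABOVE = 0.2, 0.7   # hypothetical thresholds

def plan_playback(node, plan):
    """Depth-first walk deciding, per step: omit, summarize, or segment
    further and decide again for each sub-step."""
    if node.importance < SKIP_BELOW:
        return plan                                    # omit this step
    if node.importance >= RECURSE_ABOVE and node.children:
        for child in node.children:                    # further segmentation
            plan_playback(child, plan)
        return plan
    plan.append((node.clip, round(node.importance, 2)))  # summarized playback
    return plan

Calling plan_playback(root, []) on a tree like the one in FIG. 4 yields an ordered list of (clip, summary fraction) pairs, with unimportant steps dropped and important steps expanded into their sub-steps.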
FIG. 5 is a diagram for explaining a method of controlling playback of a surgical image that includes an event, according to an embodiment.
In one embodiment, an event may occur in the surgical image regardless of the importance of each surgical step. For example, an event may include bleeding or a surgical error situation.
For example, node 410 included in the tree 400 corresponds to the relatively less important fat removal operation, but an event (for example, bleeding) may occur in the surgical step corresponding to node 420, one of the child nodes of node 410.
In this case, the computer summarizes and displays the surgical image corresponding to node 410, but may leave the surgical image corresponding to node 420, the surgical step in which the event occurred, unsummarized or only lightly summarized. For example, the surgical image corresponding to node 420 may be played back so as to occupy a substantial portion of the summarized surgical image corresponding to node 410.
The computer can also determine the importance of each recognized event. Based on the determined importance, the computer can control playback of the surgical image corresponding to the event. For example, the computer can decide whether to play back the surgical image corresponding to the event and at what summary level, and play it back accordingly.
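A sketch of this override follows; combining the step's own importance with the event importances by taking the maximum is an assumption for illustration:

def effective_importance(step_importance, event_importances):
    """A step containing an important event (e.g. bleeding) is kept nearly
    unsummarized even when the step itself is of low importance."""
    return max([step_importance, *event_importances])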
In addition, the surgical image playback control method according to the disclosed embodiments can also be applied where a surgical image is divided into one or more steps and each step is grouped into one or more groups.
The computer determines the importance of each group; surgical images corresponding to groups of relatively low importance may be omitted or heavily summarized, while surgical images corresponding to groups of relatively high importance may be left unsummarized or only lightly summarized.
In one embodiment, the computer can obtain information on the time available for playing back the surgical image, and determine the summary level for each surgical image so that all of the surgical images can be played back within that time. For example, when the surgical image must be played back within a relatively short time, more steps may be omitted and the rest summarized more heavily, according to importance.
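A minimal sketch of such budget-constrained summarization follows; proportional allocation is an assumption, and any surplus from capped clips is left unredistributed here for brevity:

def allocate_time(steps, budget_seconds):
    """steps: list of (clip_length_seconds, importance) pairs.
    Distributes the playback budget in proportion to importance, capping
    each step at its full length."""
    total = sum(imp for _, imp in steps) or 1.0
    return [min(length, budget_seconds * imp / total) for length, imp in steps]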
In one embodiment, the computer can obtain from the user information on the surgical step the user wishes to view. In this case, the computer can play back the surgical step corresponding to the obtained information without summarization, or with little summarization.
In one embodiment, the computer can traverse the data structure in real time, determine the importance of each step, and decide whether to play it back and at what summary level. The computer may judge a particular surgical step to be of low importance and decide to omit it or summarize it heavily; when a selection input is then received from the user, the computer can further segment that surgical step and display it in more detail.
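Such drill-down on a user's selection could be sketched as follows; the dictionary layout and the raised summary floor are assumptions for illustration:

def drill_down(step, min_fraction=0.5):
    """step: {"clip": str, "importance": float, "children": [step, ...]}.
    On a selection input, replace the summarized (or omitted) parent step
    by its sub-steps, each played at a raised summary fraction."""
    children = step.get("children") or []
    if not children:
        return [(step["clip"], 1.0)]       # already atomic: play in full
    return [(c["clip"], max(c["importance"], min_fraction)) for c in children]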
FIG. 6 is a configuration diagram of an apparatus 100 according to an embodiment.
The processor 102 may include one or more cores (not shown), a graphics processing unit (not shown), and/or connection paths (for example, a bus) through which it transmits and receives signals to and from other components.
The processor 102 according to one embodiment executes one or more instructions stored in the memory 104, thereby performing the surgical image playback control method described above with reference to FIGS. 1 to 5.
For example, by executing one or more instructions stored in the memory, the processor 102 can acquire a surgical image, obtain information in which the surgical image is segmented into one or more regions, obtain information on the surgical step corresponding to each of the segmented regions, determine the importance of each surgical step, and control playback of the surgical image based on the determined importance.
Meanwhile, the processor 102 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) that temporarily and/or permanently store signals (or data) processed inside the processor 102. The processor 102 may also be implemented in the form of a system on chip (SoC) including at least one of a graphics processing unit, RAM, and ROM.
The memory 104 can store programs (one or more instructions) for processing and control by the processor 102. The programs stored in the memory 104 can be divided into a plurality of modules according to their functions.
The surgical image playback control method according to the embodiment of the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
The above-described program may include code written in a computer language, such as C, C++, JAVA, or machine language, that the processor (CPU) of the computer can read through the device interface of the computer, so that the computer reads the program and executes the methods implemented as the program. Such code may include functional code related to functions defining what is necessary for executing the methods, and control code related to the execution procedures necessary for the processor of the computer to execute those functions according to a predetermined procedure. Such code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media necessary for the processor of the computer to execute the functions should be referenced. Further, when the processor of the computer needs to communicate with any other remote computer or server in order to execute the functions, the code may further include communication-related code indicating how the processor should communicate with the other remote computer or server using the communication module of the computer, and what information or media should be transmitted and received during the communication.
The stored medium means not a medium that stores data for a brief moment, such as a register, a cache, or a memory, but a medium that stores data semi-permanently and that can be read by a device. Specific examples of the stored medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. That is, the program may be stored on various recording media on various servers to which the computer can connect, or on various recording media on the user's computer. The medium may also be distributed over network-connected computer systems, and computer-readable code may be stored thereon in a distributed manner.
The steps of the method or algorithm described in connection with the embodiments of the present invention may be implemented directly in hardware, in a software module executed by hardware, or in a combination of the two. The software module may reside in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art to which the present invention pertains.
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, those of ordinary skill in the art to which the present invention pertains will understand that the present invention may be embodied in other specific forms without changing its technical spirit or essential features. Therefore, the embodiments described above should be understood as illustrative in all respects and not restrictive.
[Description of Symbols]
100: Apparatus
102: Processor
104: Memory

Claims (10)

  1. A surgical image playback control method comprising:
    obtaining, by a computer, a surgical image;
    obtaining information in which the surgical image is segmented into one or more regions;
    obtaining information on a surgical step corresponding to each of the one or more segmented regions;
    determining an importance of each of the surgical steps; and
    controlling playback of the surgical image based on the determined importance.
  2. The method of claim 1, wherein
    the obtaining of the information on the surgical steps comprises: determining a type of the surgery, and
    the determining of the importance comprises: determining the importance of each of the surgical steps based on the type of the surgery.
  3. The method of claim 1, wherein the controlling of the playback of the surgical image comprises:
    determining, based on the importance of each of the surgical steps, whether to play back the surgical image corresponding to each of the surgical steps and its summary level.
  4. The method of claim 3, wherein the controlling of the playback of the surgical image comprises:
    determining a segmentation level of each of the surgical steps based on the importance of each of the surgical steps;
    segmenting each of the surgical steps according to the determined segmentation level; and
    playing back the surgical image corresponding to each of the segmented surgical steps, wherein whether to play back the surgical image corresponding to each of the segmented surgical steps and its summary level are determined based on the segmentation level of each of the segmented surgical steps.
  5. The method of claim 1, wherein
    the obtaining of the segmented information comprises: obtaining information in which the surgical image is hierarchically segmented into one or more classification units, and
    the obtaining of the information on the surgical steps comprises:
    traversing the hierarchically segmented information, the traversal proceeding from an upper layer; and
    obtaining information on the surgical step corresponding to each piece of the hierarchically segmented information.
  6. The method of claim 5, wherein
    the determining of the importance comprises: determining an importance of the surgical step corresponding to each piece of the hierarchically segmented information, and
    the controlling of the playback of the surgical image comprises:
    determining, based on the importance of each of the surgical steps, whether to hierarchically segment each of the surgical steps; and
    determining, based on the importance of each of the hierarchically segmented surgical steps, whether to play back the surgical image corresponding to each of the hierarchically segmented surgical steps and its summary level.
  7. The method of claim 1, further comprising:
    recognizing at least one event included in the surgical image; and
    playing back a surgical image corresponding to the event.
  8. The method of claim 7, wherein the playing back of the surgical image corresponding to the event comprises:
    determining an importance of the recognized event; and
    controlling playback of the surgical image corresponding to the event based on the importance of the event.
  9. A surgical image playback control apparatus comprising:
    a memory storing one or more instructions; and
    a processor executing the one or more instructions stored in the memory,
    wherein the processor, by executing the one or more instructions, performs:
    obtaining a surgical image;
    obtaining information in which the surgical image is segmented into one or more regions;
    obtaining information on a surgical step corresponding to each of the one or more segmented regions;
    determining an importance of each of the surgical steps; and
    controlling playback of the surgical image based on the determined importance.
  10. A computer program, combined with a computer which is hardware, stored on a computer-readable recording medium to perform the method of claim 1.
PCT/KR2018/010334 2017-12-28 2018-09-05 Method, apparatus, and program for surgical image playback control WO2019132169A1 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
KR10-2017-0182900 2017-12-28
KR20170182900 2017-12-28
KR10-2017-0182899 2017-12-28
KR20170182898 2017-12-28
KR20170182899 2017-12-28
KR10-2017-0182898 2017-12-28
KR10-2018-0026574 2018-03-06
KR1020180026574A KR101880246B1 (en) 2017-12-28 2018-03-06 Method, apparatus and program for controlling surgical image play

Publications (1)

Publication Number Publication Date
WO2019132169A1 true WO2019132169A1 (en) 2019-07-04

Family

ID=63058435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/010334 WO2019132169A1 (en) 2017-12-28 2018-09-05 Method, apparatus, and program for surgical image playback control

Country Status (2)

Country Link
KR (4) KR101880246B1 (en)
WO (1) WO2019132169A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102172258B1 (en) * 2018-12-05 2020-10-30 쓰리디메디비젼 주식회사 Surgical video procution system and surgical video procution method
KR102361219B1 (en) * 2019-09-09 2022-02-11 (주)미래컴퍼니 Method and apparatus for obtaining surgical data in units of sub blocks
KR102315212B1 (en) * 2019-10-14 2021-10-20 경상국립대학교산학협력단 Hook device for removal of broken intramedullary nail
KR102180921B1 (en) * 2019-10-18 2020-11-19 주식회사 엠티이지 Apparatus and method for inserting annotation on surgery video based on artificial intelligence
CN110840534B (en) * 2019-12-19 2022-05-17 上海钛米机器人科技有限公司 Puncture speed planning method and device, puncture equipment and computer storage medium
KR20210130041A (en) * 2020-04-21 2021-10-29 사회복지법인 삼성생명공익재단 System for providing educational information of surgical techniques and skills and surgical guide system based on machine learning using 3 dimensional image
KR102426925B1 (en) * 2020-06-23 2022-07-29 (주)휴톰 Method and program for acquiring motion information of a surgical robot using 3d simulation
KR102407531B1 (en) * 2020-08-05 2022-06-10 주식회사 라온메디 Apparatus and method for tooth segmentation
KR102427171B1 (en) * 2020-09-14 2022-07-29 (주)휴톰 Method and Apparatus for providing object labeling within Video
KR102619729B1 (en) * 2020-11-20 2023-12-28 서울대학교산학협력단 Apparatus and method for generating clinical record data
CN112891685B (en) * 2021-01-14 2022-07-01 四川大学华西医院 Method and system for intelligently detecting position of blood vessel
KR102640314B1 (en) * 2021-07-12 2024-02-23 (주)휴톰 Artificial intelligence surgery system amd method for controlling the same
CN113616336B (en) * 2021-09-13 2023-04-14 上海微创微航机器人有限公司 Surgical robot simulation system, simulation method, and readable storage medium
KR102405647B1 (en) * 2022-03-15 2022-06-08 헬리오센 주식회사 Space function system using 3-dimensional point cloud data and mesh data


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009237923A (en) * 2008-03-27 2009-10-15 Nec Corp Learning method and system
JP2010092266A (en) * 2008-10-08 2010-04-22 Nec Corp Learning device, learning method and program
KR102239714B1 (en) * 2014-07-24 2021-04-13 삼성전자주식회사 Neural network training method and apparatus, data processing apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120046439A (en) * 2010-11-02 2012-05-10 서울대학교병원 (분사무소) Method of operation simulation and automatic operation device using 3d modelling
KR20120126679A (en) * 2011-05-12 2012-11-21 주식회사 이턴 Control method of surgical robot system, recording medium thereof, and surgical robot system
KR101175065B1 (en) * 2011-11-04 2012-10-12 주식회사 아폴로엠 Method for bleeding scanning during operation using image processing apparatus for surgery
KR101302595B1 (en) * 2012-07-03 2013-08-30 한국과학기술연구원 System and method for predict to surgery progress step
KR20160096868A (en) * 2015-02-06 2016-08-17 경희대학교 산학협력단 Apparatus for generating guide for surgery design information and method of the same

Also Published As

Publication number Publication date
KR101880246B1 (en) 2018-07-19
KR20190080703A (en) 2019-07-08
KR102298412B1 (en) 2021-09-06
KR20190080702A (en) 2019-07-08
KR20190088375A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
WO2019132169A1 (en) Method, apparatus, and program for surgical image playback control
WO2019132168A1 (en) System for learning surgical image data
KR102014385B1 (en) Method and apparatus for learning surgical image and recognizing surgical action based on learning
WO2019132614A1 (en) Surgical image segmentation method and apparatus
WO2019132165A1 (en) Method and program for providing feedback on surgical outcome
US20220108450A1 (en) Surgical simulator providing labeled data
US20230289474A1 (en) Method and system for anonymizing raw surgical procedure videos
WO2019235828A1 (en) Two-face disease diagnosis system and method thereof
WO2019231104A1 (en) Method for classifying images by using deep neural network and apparatus using same
WO2020067632A1 (en) Method, apparatus, and program for sampling learning target frame image of video for ai image learning, and method of same image learning
WO2019132244A1 (en) Method for generating surgical simulation information and program
WO2021206518A1 (en) Method and system for analyzing surgical procedure after surgery
KR102276862B1 (en) Method, apparatus and program for controlling surgical image play
WO2022108387A1 (en) Method and device for generating clinical record data
WO2019164273A1 (en) Method and device for predicting surgery time on basis of surgery image
WO2020159276A1 (en) Surgical analysis apparatus, and system, method, and program for analyzing and recognizing surgical image
WO2023287077A1 (en) Artificial intelligence surgical system and control method therefor
WO2021261727A1 (en) Capsule endoscopy image reading system and method
WO2021246648A1 (en) Method and device for processing blood vessel image on basis of user input
WO2021206517A1 (en) Intraoperative vascular navigation method and system
WO2021015490A2 (en) Method and device for analyzing specific area of image
WO2019164278A1 (en) Method and device for providing surgical information using surgical image
CN116787444A (en) Mechanical arm simulation system and robot simulation system
WO2019164279A1 (en) Method and apparatus for evaluating recognition level of surgical image
JP2000081908A (en) Plant monitor and control system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18896849

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18896849

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/01/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18896849

Country of ref document: EP

Kind code of ref document: A1