WO2023022258A1 - Image information-based laparoscope robot artificial intelligence surgery guide system - Google Patents


Info

Publication number
WO2023022258A1
Authority
WO
WIPO (PCT)
Prior art keywords
surgical
data
robot
surgery
tool
Application number
PCT/KR2021/011021
Other languages
French (fr)
Korean (ko)
Inventor
황희선
노경석
김정준
김종찬
박지현
공성호
Original Assignee
한국로봇융합연구원
서울대학교병원
Priority date
Filing date
Publication date
Application filed by 한국로봇융합연구원 and 서울대학교병원
Publication of WO2023022258A1

Classifications

    • A: HUMAN NECESSITIES
      • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 17/00: Surgical instruments, devices or methods, e.g. tourniquets
            • A61B 2017/00017: Electrical control of surgical instruments
              • A61B 2017/00119: with audible or visual output alarm; indicating an abnormal situation
              • A61B 2017/00203: with speech control or speech recognition
          • A61B 34/00: Computer-aided surgery; manipulators or robots specially adapted for use in surgery
            • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
              • A61B 2034/2046: Tracking techniques
                • A61B 2034/2055: Optical tracking systems
                • A61B 2034/2065: Tracking using image or pattern recognition
            • A61B 34/25: User interfaces for surgical systems
              • A61B 2034/252: indicating steps of a surgical procedure
              • A61B 2034/256: having a database of accessory information, e.g. context-sensitive help or scientific articles
            • A61B 34/30: Surgical robots
            • A61B 34/70: Manipulators specially adapted for use in surgery
          • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
            • A61B 90/36: Image-producing devices or illumination devices not otherwise provided for
              • A61B 90/361: Image-producing devices, e.g. surgical cameras
            • A61B 90/50: Supports for surgical instruments, e.g. articulated arms
              • A61B 2090/506: using a parallelogram linkage, e.g. pantograph
          • A61B 2560/00: Constructional details of operational features of apparatus; accessories for medical measuring apparatus
            • A61B 2560/04: Constructional details of apparatus
              • A61B 2560/0487: Special user inputs or interfaces
                • A61B 2560/0493: controlled by voice
    • G: PHYSICS
      • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
        • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
          • G16H 30/00: ICT specially adapted for the handling or processing of medical images
            • G16H 30/40: for processing medical images, e.g. editing

Definitions

  • The present invention relates to an image information-based laparoscopic robot artificial intelligence surgery guide system and guide method.
  • Surgery refers to treating a disease by incising, cutting, or otherwise manipulating skin, mucous membranes, or other tissues using medical instruments.
  • Compared with open surgery, which incises and opens the skin at the surgical site to treat, reshape, or remove internal organs, robotic surgery has recently been emerging as an alternative because of problems such as bleeding, side effects, patient pain, and scarring.
  • A surgical robot refers to a robot capable of substituting for the surgical operations performed by a surgeon. Compared to humans, surgical robots have the advantage of performing more accurate and precise movements and of enabling remote surgery.
  • Surgical robots currently being developed worldwide include bone surgery robots, laparoscopic surgery robots, and stereotactic surgery robots.
  • A surgical robot device is generally composed of a master console and a slave robot. When the operator manipulates a control lever (for example, a handle) provided on the master console, an instrument coupled to or held by the robot arm of the slave robot is manipulated accordingly to perform surgery.
  • The surgical robot is provided with a robot arm for surgical manipulation, and an instrument is mounted on the front end of the robot arm. Because the instrument moves together with the robot arm, the patient's skin is partially punctured and the instrument is inserted through the puncture to perform surgery. When the surgical area is wide, however, the advantage of robotic surgery may be diminished, since the skin must be incised along the path the instrument travels or punctured separately for each surgical site.
  • To avoid this, a virtual rotation center point is set at a predetermined position near the distal end of the instrument mounted on the front end of the robot arm, and the robot arm is controlled so that the instrument rotates around this point. This virtual center point is called a 'remote center' or RCM (remote center of motion).
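The RCM constraint can be illustrated with a short sketch: the instrument shaft must always pass through the fixed RCM point, so a desired tip target fully determines the shaft direction and insertion depth. This is an illustrative sketch, not part of the patent; the function and all names are hypothetical.

```python
import math

def rcm_pose(rcm, target, shaft_length):
    """Given the fixed RCM point and a desired tip target (3D tuples),
    return the unit shaft direction, the insertion depth past the RCM,
    and the proximal end of the shaft, so that the shaft always passes
    through the RCM point."""
    axis = tuple(t - r for t, r in zip(target, rcm))
    depth = math.sqrt(sum(a * a for a in axis))       # distance RCM -> tip
    direction = tuple(a / depth for a in axis)        # unit shaft direction
    # Slide back along the same direction to locate the proximal end.
    base = tuple(r - d * (shaft_length - depth) for r, d in zip(rcm, direction))
    return direction, depth, base

direction, depth, base = rcm_pose(rcm=(0.0, 0.0, 0.0),
                                  target=(0.0, 0.0, -5.0),
                                  shaft_length=30.0)
```

Because the base is computed by sliding back along the same direction, every point of the shaft lies on the line through the RCM point, which is exactly the constraint the robot arm must maintain at the puncture site.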
  • The present invention has been devised to solve the above conventional problems. According to an embodiment of the present invention, one purpose is to provide an image information-based laparoscopic robot artificial intelligence surgery guide system that, while mapping the guide surgical image data onto the currently captured image data, transmits a notification signal when a step in the surgical sequence is missing or a change exceeding a threshold value occurs, or controls the robot to capture an image of the location of the corresponding event immediately before surgery is completed.
  • Another purpose of the present invention is to provide an image information-based laparoscopic robot artificial intelligence surgery guide system that memorizes the location of a tool (gauze, mass, etc.) inserted into the affected area during surgery, automatically photographs that location immediately before surgery is completed to determine whether the tool has been removed, and, if it is determined that the tool has not been removed, controls the robot to photograph the location immediately before surgery is completed.
  • A further purpose of the present invention is to provide an image information-based laparoscopic robot artificial intelligence surgery guide system that can store a momentary robot posture to be memorized through a voice command or a foot pedal, and can later establish a movement plan to move so as to show the memorized momentary image at the user's request.
  • Still another purpose is to provide an image information-based laparoscopic robot artificial intelligence surgery guide system that can recognize situations in which movement to a target point is impossible because of a changed environment (e.g., organ movement) during surgery or the current surgical tool position, display objects that become obstacles during movement on the screen, and display the best possible view considering the kinematic characteristics of the laparoscopic robot.
  • An object of the present invention is achieved by an image information-based laparoscopic robot artificial intelligence surgery guide system for guiding surgery by monitoring a surgical procedure based on image data captured by the laparoscopic camera of a laparoscopic camera holder robot, comprising: a data collection unit that collects surgical image data; a surgical image learning DB that learns the collected surgical image data and stores the resulting surgical learning data classified by surgery type and operator; an image processing device that receives the current surgical image data captured by the camera and exchanges data, through communication in a non-real-time control area, with a controller that controls driving of the holder robot; a guide monitoring unit that generates surgical guide data by comparing the surgical learning data with the current surgical image data captured by the camera; and notification means for delivering the surgical guide data.
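As a rough illustration of the components listed above (not the patent's implementation), a learning DB keyed by surgery type and operator and a guide check that notifies on missing steps could be sketched as follows; the class and method names are hypothetical:

```python
class SurgeryGuideSystem:
    """Toy sketch: learning DB keyed by (surgery type, operator),
    plus a guide step that compares learned vs. observed sequences
    and calls a notification callback for missing steps."""

    def __init__(self):
        self.learning_db = {}  # (surgery_type, operator) -> learned step sequence

    def store_learning_data(self, surgery_type, operator, sequence):
        self.learning_db[(surgery_type, operator)] = list(sequence)

    def guide(self, surgery_type, operator, observed_steps, notify):
        learned = self.learning_db.get((surgery_type, operator), [])
        missing = [s for s in learned if s not in observed_steps]
        for step in missing:
            notify(f"missing step: {step}")  # the 'notification means'
        return missing

guide_system = SurgeryGuideSystem()
guide_system.store_learning_data("laparoscopic", "operator-A",
                                 ["incise", "retract", "suture"])
```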
  • In addition, the surgical learning data may be characterized in that surgical sequence characteristics are learned for each surgery type and each operator.
  • The guide monitoring unit includes a comparison analysis unit that compares and analyzes the current surgical image data and the surgical learning data in real time, and an event determination unit that, according to the comparative analysis by the comparison analysis unit, determines an event as to whether a sequence step is missing or whether a change in the current surgical image data relative to the surgical learning data exceeds a threshold value. The notification means may be characterized in that a notification signal is transmitted when such an event occurs.
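The two event types the event determination unit checks (a missing sequence step, and a change beyond a threshold) can be sketched as below. The pose representation, the per-step dictionaries, and the deviation metric are illustrative assumptions, not taken from the patent:

```python
def detect_events(learned_poses, observed_poses, threshold):
    """learned_poses / observed_poses: dicts mapping sequence step ->
    (x, y, angle) of the surgical tool. Emits a 'missing' event for steps
    absent from the observation, and a 'change' event when any pose
    component deviates from the learned pose by more than the threshold."""
    events = []
    for step, learned in learned_poses.items():
        if step not in observed_poses:
            events.append(("missing", step))
            continue
        deviation = max(abs(l - o) for l, o in zip(learned, observed_poses[step]))
        if deviation > threshold:
            events.append(("change", step))
    return events
```

Each returned event tuple would then be handed to the notification means, and its step label tells the controller where to direct the camera.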
  • the controller may control driving of the holder robot to capture an image of a location where the event occurs.
  • The image processing device recognizes the surgical tool in the current surgical image data, identifies its location and type, and exchanges data with the controller in the non-real-time control area. The surgical learning data learns the position and direction characteristics of the surgical tools according to the surgical sequence; the comparison analysis unit compares the position and direction characteristics of the surgical tools according to the surgical sequence in the surgical learning data with the surgical tools in the current surgical image data, and the event determination unit may determine an event as to whether the position and direction characteristics of the surgical tool change by more than a threshold value according to the sequence.
  • The image processing device recognizes the tool to be removed in the current surgical image data, identifies its location and type, and exchanges data with the controller in the non-real-time control area. When the tool to be removed is recognized, the controller controls driving of the holder robot to capture an image of the position of the tool to be removed right before surgery is completed.
  • The system may further include a removal decision unit that, when the tool to be removed is recognized, determines whether the tool has been removed; when it is determined that the tool has not been removed, the controller may be characterized in that it controls driving of the holder robot to capture an image of the tool to be removed immediately before completion of the operation.
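A minimal sketch of the removal decision logic described above, with hypothetical names: the position of each tool to be removed (gauze, mass, etc.) is memorized on insertion, and whatever remains unremoved yields the positions the holder robot should photograph before completion.

```python
class RemovalDecisionUnit:
    """Toy sketch of the removal decision: track tools inserted into the
    affected area and report the unremoved ones just before completion."""

    def __init__(self):
        self.inserted = {}  # tool id -> last known position

    def tool_inserted(self, tool_id, position):
        self.inserted[tool_id] = position

    def tool_removed(self, tool_id):
        self.inserted.pop(tool_id, None)

    def positions_to_photograph(self):
        # Locations the holder robot should image before surgery completes.
        return dict(self.inserted)
```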
  • The control input may be a position command based on the displayed image, and the controller may control driving of the holder robot so that the position of the image is changed based on the control input.
  • The voice command processing device may be characterized in that it includes a voice command DB that learns the characteristics of each person and is classified by voice control command, recognizes a voice control command from the voice data, and exchanges data with the controller in the non-real-time control area.
  • The system may further include a robot posture storage unit for commanding storage of the posture of the robot at a specific point in time, or during a specific time range, during surgery; the controller may be characterized in that it controls driving of the holder robot to switch to the stored robot posture at the user's request.
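The robot posture storage unit can be sketched as a simple named store; the trigger (voice command or foot pedal) would merely call `store()`, and a later user request would call `recall()` to obtain the target posture for the motion plan. All names here are illustrative.

```python
class RobotPostureStore:
    """Toy sketch: store the joint posture at a named moment and
    recall it later on request."""

    def __init__(self):
        self._postures = {}  # name -> stored joint angles

    def store(self, name, joint_angles):
        # Store an immutable snapshot so later joint motion cannot mutate it.
        self._postures[name] = tuple(joint_angles)

    def recall(self, name):
        return self._postures.get(name)  # None when nothing was stored
```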
  • When a tool that matches neither the surgical tool DB nor the removal target tool DB exists in the current surgical image data, the system may be characterized in that it further comprises an obstacle recognition unit that recognizes the tool as an obstacle and displays it in the surgical image data.
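The obstacle rule above (anything matching neither DB is an obstacle) reduces to a simple membership check; this is an illustrative sketch with hypothetical label-based inputs:

```python
def classify_obstacles(detections, surgical_tool_db, removal_target_db):
    """detections: labels recognized in the current image. Labels found in
    neither the surgical tool DB nor the removal target DB are classified
    as obstacles, to be overlaid on the surgical image."""
    return [d for d in detections
            if d not in surgical_tool_db and d not in removal_target_db]
```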
  • The image information-based laparoscopic robot artificial intelligence surgery guide system, while mapping the guide surgical image data onto the currently captured image data, has the effect of being able to transmit a notification signal when a step in the surgical sequence is missing or a change exceeding the threshold value occurs, or to control the robot to capture an image of the location of the corresponding event right before surgery is completed.
  • The system also has the effect of memorizing the position of a tool (gauze, mass, etc.) inserted into the affected area during surgery, automatically photographing that position to determine whether the tool has been removed and, if it is determined that the tool has not been removed, controlling the photographing of the corresponding position immediately before the surgery is completed. In addition, the momentary robot posture to be memorized can be stored through a voice command or foot pedal, with the effect that a movement plan can later be established to move so as to show the memorized momentary image at the user's request.
  • With the image information-based laparoscopic robot artificial intelligence surgery guide system, it is possible to grasp situations in which movement to the target point cannot be performed because of the changed environment (organ movement) or the current surgical tool position during surgery, and to display obstacle objects on the screen when moving. The optimal screen can be displayed considering the kinematic characteristics of the laparoscopic robot, and the current position is automatically saved when a move command to a previous position is given, so that the move can afterwards be performed according to a return command to the previous position.
  • FIG. 1 is a configuration diagram of a laparoscopic holder robot system having a laparoscopic mounting adapter and an RCM structure according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing a driving mechanism having an RCM structure according to an embodiment of the present invention.
  • FIG. 3 is a side view of a laparoscopic holder robot having a laparoscopic mounting adapter and an RCM structure according to an embodiment of the present invention.
  • FIG. 4 is a side cross-sectional view of the laparoscopic adapter device attached to the laparoscope according to an embodiment of the present invention.
  • FIG. 5 is a side cross-sectional view of a detachable unit according to an embodiment of the present invention.
  • FIG. 6 is a front view of a detachable unit according to an embodiment of the present invention.
  • FIG. 7 is a block diagram showing a control flow of a controller according to an embodiment of the present invention.
  • FIG. 8 is a diagram of a laparoscopic camera holder robot control system according to an embodiment of the present invention.
  • FIG. 9 is a block diagram of an external control device and a controller communicatively connected by an external interface according to an embodiment of the present invention.
  • FIG. 10 is a block diagram of a four-mode laparoscopic camera holder robot control system according to an embodiment of the present invention.
  • FIG. 11 is a flowchart of a method for controlling a laparoscopic camera holder robot according to an embodiment of the present invention.
  • FIG. 12 is an example of the screen when a grid command is input in voice command mode, and the screen when voice command number 4 is given, according to an embodiment of the present invention.
  • FIG. 13 is a block diagram of an image information-based laparoscopic robot artificial intelligence surgery guide system according to an embodiment of the present invention.
  • FIG. 14 is a block diagram of a data collection unit according to an embodiment of the present invention.
  • FIG. 15 is a block diagram of a learning DB according to an embodiment of the present invention.
  • FIG. 16 is a block diagram of a guide monitoring unit according to an embodiment of the present invention.
  • FIG. 1 shows a configuration diagram of a laparoscopic holder robot system having a laparoscopic mounting adapter and an RCM structure according to an embodiment of the present invention.
  • First, the configuration and function of the laparoscopic holder robot having the laparoscopic mounting adapter and the RCM structure according to an embodiment of the present invention are described, focusing on the driving mechanism; second, the control system and control method for the laparoscopic holder robot are described; and third, the method and system for monitoring the laparoscopic surgery process based on image information are described.
  • FIG. 2 is a schematic diagram showing a driving mechanism having an RCM structure according to an embodiment of the present invention.
  • FIG. 3 shows a side view of the laparoscopic holder robot having a laparoscopic mounting adapter and an RCM structure according to an embodiment of the present invention.
  • The laparoscope holder robot 100 having a laparoscope mounting adapter and an RCM structure basically holds a laparoscope 1 equipped with a camera 2 at one end and an image sensor 3 at the other end, and rotates the laparoscope around a remote center of motion (RCM) point 4.
  • The laparoscopic holder robot 100 having a laparoscopic mounting adapter and an RCM structure generally includes a body 5, an RCM structure 10, a first rotation drive unit 20, a second rotation drive unit 30, a linear movement device 40, and a laparoscopic adapter device 50 having a laparoscopic axial rotation device 60.
  • The laparoscopic holder robot 100 having the laparoscopic mounting adapter and the RCM structure according to an embodiment of the present invention has four degrees of freedom, and the laparoscope 1 can be detached from the holder robot 100 by means of the laparoscopic adapter device 50.
  • Specifically, the laparoscope 1 is tilted up and down about the RCM point 4 by the first rotation drive unit 20; it is rotationally driven about the imaginary axis connecting the RCM point 4 and the first rotation joint 11 by the second rotation drive unit 30; it can be moved in the longitudinal direction by the linear movement device 40; and it can be rotated about its longitudinal axis by the laparoscope axial rotation device 60, giving four degrees of freedom in total.
  • the RCM structure 10 has a rear end coupled on the first rotary joint 11 of the body, and a front end coupled to the laparoscope 1 side on the second rotary joint 16.
  • the first rotation driving unit 20 is provided in the body 5 and drives the RCM structure 10 to rotate with respect to the first rotation joint 11, so that the laparoscope 1 rotates around the RCM point 4 .
  • the RCM structure 10 has a structure in which the first link unit 12 and the second link unit 14 are combined.
  • The first rotation joint 11 includes a 1-1 rotation joint 11-1 and a 1-2 rotation joint 11-2 spaced downward from the 1-1 rotation joint 11-1 by a specific interval, and the second rotation joint 16 includes a 2-1 rotation joint 16-1 and a 2-2 rotation joint 16-2.
  • The first link unit 12 includes a 1-1 link 12-1 having one end connected to the 1-1 rotation joint 11-1, and a 1-2 link 12-2 that is connected to the other end of the 1-1 link 12-1 by a first hinge 13 and whose other end is connected to the linear movement device 40 through the 2-1 rotation joint 16-1.
  • The second link unit 14 includes a 2-1 link 14-1 having one end connected to the 1-2 rotation joint 11-2, and a 2-2 link 14-2 that is connected to the other end of the 2-1 link 14-1 by a second hinge 15 and whose other end is connected to the linear movement device 40 through the 2-2 rotation joint 16-2.
  • the first link unit 12 and the second link unit 14 are hinged at a point where the 1-2 link 12-2 and the 2-1 link 14-1 intersect.
  • the second rotation drive unit 30 is installed on one side of the body 5 and drives the laparoscope 1 to rotate based on a virtual line connecting the RCM point 4 and the first rotation joint 11.
  • The RCM point 4 is located at the point where the virtual line connecting the 1-1 rotation joint 11-1 and the 1-2 rotation joint 11-2 intersects the laparoscope 1. Therefore, the laparoscope 1 can be rotated about the RCM point 4 by driving the first rotation drive unit through the RCM structure 10.
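The geometric statement above, that the RCM point lies at the intersection of the line through the two rotation joints and the laparoscope axis, can be checked with a standard 2D line intersection computation. This is an illustrative sketch projected into a plane, not a routine from the patent:

```python
def line_intersection(p1, p2, q1, q2):
    """Intersection of the line through p1-p2 (e.g., the two rotation
    joints) with the line through q1-q2 (e.g., the laparoscope axis),
    in 2D. Returns None when the lines are parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, q1, q2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel lines: no single intersection point
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Joints on the vertical axis, laparoscope along a horizontal line:
rcm = line_intersection((0, 0), (0, -2), (-1, -5), (1, -5))
```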
  • the linear movement device 40 is connected to the second rotation joint 16 and is configured to move the laparoscope 1 in the longitudinal direction.
  • the specific configuration, means, and form of the linear movement device 40 are not limited as long as it can move the laparoscope 1 along the longitudinal direction of the laparoscope 1.
  • the laparoscopic adapter device is configured to attach and detach the linear movement device and the laparoscope.
  • FIG. 4 shows a side cross-sectional view of the laparoscopic adapter device attached to the laparoscope according to an embodiment of the present invention.
  • FIG. 5 shows a side cross-sectional view of a detachable unit according to an embodiment of the present invention.
  • FIG. 6 shows a front view of the detachable unit according to an embodiment of the present invention.
  • FIG. 7 shows a block diagram of the control flow of the controller according to an embodiment of the present invention.
  • The laparoscopic adapter device 50 includes a fastening means provided on one side of its upper portion and configured to be attached to and detached from the linear movement device 40, and a detachable unit 70 configured to attach and detach the laparoscope 1.
  • This detachable unit 70 is configured to be replaceable according to the size of the diameter of the laparoscope.
  • the laparoscopic axial rotation device 60 may be detachably installed between the linear movement device 40 and the detachable unit 70. Through the laparoscope axial rotation device 60, the laparoscope can be rotated based on the longitudinal axis.
  • According to the present invention, laparoscopes 1 of different diameters can be mounted on the robot 100 like a module. Since the diameter of the laparoscope 1 varies, the mounting can be divided into two or three parts, while the external size of the adapter 50 is kept constant so that it can be combined with the robot 100. Since axial rotation is not required in some cases, the laparoscopic axial rotation device is configured as a detachable module. Since a laparoscope 1 with an inclined angle can also be mounted, the adapter is configured to align the central axis, and the motor drive 62 and the electric motor 61 are installed together as one module.
  • the attachment/detachment unit 70 has a cylindrical inner surface, into which the laparoscope 1 is inserted, and a mounting portion 71 having an incision in the longitudinal direction and , It may be configured to include a cam clamping member 72 for fixing the laparoscope 1 by tightening the mounting portion 71 by manipulation.
  • The controller 200 controls the driving of the first rotation drive unit 20, the second rotation drive unit 30, the linear movement device 40, and the laparoscope axial rotation device 60 to position the end of the laparoscope 1, thereby adjusting the imaging position of the laparoscopic camera 2, as will be described later.
  • the adapter device 50 is coupled to the laparoscope axial rotation device 60 and is configured to detach only the laparoscope 1 itself.
  • the position recognition unit recognizes a longitudinal movement position of the laparoscope and a rotation angle position based on a longitudinal axis of the laparoscope from a reference position based on an angle criterion and a length criterion.
  • To resume, the controller 200 controls the driving of the linear movement device 40 and the laparoscope axial rotation device 60 to move the laparoscope 1 from the reference position to the laparoscope position at the point where the procedure was stopped.
  • the longitudinal position of the laparoscope 1 should not be changed.
  • A reference line is marked so that the light source mechanism and the adapter device 50 already mounted on the laparoscope 1 are aligned in angle, and the length is matched using the step on the laparoscope. That is, after the laparoscope holder robot 100 is coupled to the laparoscope 1 and mounted in the robot system, the RCM point 4 and the current position are determined through a calibration process; once performed, this calibration process is not needed again after the laparoscope is detached.
  • FIG. 8 shows a block diagram of a laparoscopic camera holder robot control system according to an embodiment of the present invention.
  • FIG. 9 shows a block diagram of an external control device and a controller that are communicatively connected by an external interface according to an embodiment of the present invention.
  • FIG. 10 shows a block diagram of a four-mode laparoscopic camera holder robot control system according to an embodiment of the present invention.
  • the laparoscopic camera holder robot control system is a system for controlling the driving of the aforementioned laparoscopic camera holder robot 100 .
  • The controller 200 basically controls the driving of the holder robot 100 based on the control command signal, that is, it controls the driving of the first rotation drive unit 20, the second rotation drive unit 30, the linear movement device 40, and the laparoscopic axial rotation device 60.
  • the controller 200 is divided into a real-time control area and a non-real-time control area, and is configured to mutually exchange data through internal communication.
  • The real-time controller 202 of the controller 200 receives control signals from the posture measurement unit and the top-priority control input means in the real-time domain.
  • the top priority control input means is a means for inputting a top priority control command signal to the controller 200 by a user in a real-time control area, and may be composed of a foot pedal 110 in an embodiment of the present invention.
  • the user inputs a control signal to the controller 200 through manipulation of the foot pedal 110 .
  • the posture measurement unit is configured to measure the posture data of the holder robot 100 in the real-time control area and transmits the data to the controller 200.
  • the posture measurement unit may be configured with the IMU sensor 120.
  • the display unit 150 is configured to display image data captured by the laparoscopic camera 2 in real time.
  • The external control device 210 is communicatively connected to the external interface 201 provided in the controller 200 and is configured to adjust the position of the end of the laparoscope 1 by means of an external adjustment input, thereby adjusting the position of the image displayed on the display unit 150.
  • the external interface 201 operates in the non-real-time control area and exchanges data with the real-time control area of the controller through internal communication.
  • the external adjustment input is a position command based on the displayed image, and the controller 200 controls the driving of the holder robot 100 so that the position of the image is changed based on the external adjustment input.
  • the external control device 210 is a device with which the operator (user) moves the camera holder robot 100 during surgery; by commanding a position in the laparoscopic image, it indirectly controls the end position of the laparoscope 1 attached to the camera holder robot 100 and thereby the displayed image.
  • the external control device 210 is connected to the camera holder robot controller 200 through the "external interface 201" module in the camera holder robot controller 200, and the external interface 201 module supports various communication methods (ADS, TCP/IP, Serial, etc.).
  • the external interface 201 module operates in a non-real-time area and exchanges data with the real-time control area of the camera holder robot controller 200 through internal communication.
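The real-time/non-real-time split described above implies some hand-off mechanism between the two domains. The sketch below is an assumption-laden illustration only — the patent does not disclose the mechanism, and the ADS/TCP/IP/Serial transport is abstracted away. A single-slot, lock-protected mailbox lets the non-real-time external interface post the latest position command for the real-time loop to consume:

```python
import threading

class CommandMailbox:
    """Single-slot mailbox: the non-real-time external interface writes
    position commands; the real-time control loop reads the latest one.
    (Hypothetical sketch; class name and interface are assumptions.)"""

    def __init__(self):
        self._lock = threading.Lock()
        self._latest = None

    def post(self, command):
        # Called from the non-real-time side; overwrites any stale command.
        with self._lock:
            self._latest = command

    def take(self):
        # Called from the real-time loop; returns None if nothing new.
        with self._lock:
            cmd, self._latest = self._latest, None
            return cmd

mailbox = CommandMailbox()
mailbox.post({"dx": 1.0, "dy": 0.0})   # external adjustment input arrives
print(mailbox.take())                   # real-time side consumes it
print(mailbox.take())                   # nothing new -> None
```

Overwriting rather than queueing keeps the real-time side bounded: it always acts on the freshest command and never blocks on a backlog.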
  • the image processing device 130 is configured to receive image data captured by the camera 2 and exchange data with the controller 200 through communication in a non-real-time control area.
  • the image processing device 130 learns sample surgical tool images and includes a surgical tool learning DB classified by type of surgical tool; it recognizes the surgical tool in the image data, identifies its location and type, and is configured to exchange data with the controller 200 in the non-real-time control area.
  • the voice command processing device 140 receives voice data from the microphone 6, recognizes voice control commands, and exchanges data with the controller through communication in the non-real-time control area.
  • the voice control input is a position command based on the displayed image, and the controller 200 controls the driving of the holder robot 100 so that the position of the image is changed based on the voice control input.
  • the voice command processing device 140 includes a voice command DB in which the characteristics of each person are learned and commands are classified by voice control command; it is configured to recognize the voice control command from the voice data and exchange data with the controller 200 in the non-real-time control area.
  • when the rear end of the laparoscope 1 has an inclination angle, the laparoscopic camera holder robot control system is configured to include a laparoscopic inclination angle correction unit that, based on a kinematic map between the inclination angle, the screen coordinate system of the image data, and the laparoscope end coordinate system, calculates the movement of the laparoscope end coordinate system corresponding to a movement of the screen coordinate system.
  • a device capable of inputting the installed laparoscopic inclination angle is provided, and based on the kinematic map between the screen coordinate system and the laparoscope end coordinate system, the movement of the laparoscope end coordinate system according to the movement of the screen coordinate system is calculated.
  • the robot motion for generating the required laparoscope end coordinate system motion is then generated.
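The inclination-angle correction above can be pictured as a planar rotation between the screen axes and the laparoscope-end axes. The sketch below is an assumed minimal model — the patent only states that a kinematic map is used; the function name and the two-axis simplification are illustrative:

```python
import math

def screen_to_scope_motion(dx_screen, dy_screen, incline_deg):
    """Map a desired on-screen motion to laparoscope-end motion when the
    scope has an inclined (angled) rear end. Minimal planar sketch: the
    inclination rotates the screen axes relative to the scope axes, so
    the inverse rotation is applied. (Assumed model, not the patent's
    actual kinematic map.)"""
    a = math.radians(incline_deg)
    dx = math.cos(a) * dx_screen + math.sin(a) * dy_screen
    dy = -math.sin(a) * dx_screen + math.cos(a) * dy_screen
    return dx, dy

# With a 0-degree (straight) scope the mapping is the identity:
print(screen_to_scope_motion(1.0, 0.0, 0.0))  # (1.0, 0.0)
```

A full implementation would fold this rotation into the robot's Jacobian so that a screen-space command produces the corresponding joint motion.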
  • Figure 11 shows a flow chart of a laparoscopic camera holder robot control method according to an embodiment of the present invention.
  • FIG. 12 shows an example of the screen when a grid command is input in the voice command mode according to an embodiment of the present invention, and the screen after the voice command "4" is given.
  • the laparoscopic camera holder robot control system can be operated in a manual operation mode, an external operation mode, a basic operation mode, and an automatic operation mode
  • the basic operation mode is operated by the foot pedal and voice commands.
  • control by the foot pedal, which belongs to the real-time control area, is applied with the highest priority over the other modes.
  • the user adjusts the position of the holder robot 100 through the manual operation mode (S2, S3), and image data captured by the laparoscopic camera 2 is displayed on the display unit 150.
  • the user can control the position of the robot 100 after pressing the “manual mode” button on the robot 100 or the controller 200.
  • while the manual operation mode is active, the button is lit; when the button is pressed again, the light turns off and the manual operation mode is released.
  • when the basic operation mode is executed, the posture data measured through the IMU sensor 120 in the real-time control area is input, the user inputs a control signal through the foot pedal 110, and the controller 200 controls the driving of the holder robot 100 in the real-time control area based on the foot pedal 110 input signal (S5).
  • the voice command processing device 140 recognizes a specific voice control command from the voice data, and the controller 200 controls the driving of the holder robot 100 based on the voice control command (S8).
  • the control input by the foot pedal takes precedence over the voice command.
  • the basic operation mode operates by default when the power of the camera holder robot 100 is turned on; both the foot pedal 110, directly connected to the real-time control area, and voice commands, connected to the non-real-time control area, can be used.
  • the foot pedal command connected to the real-time area takes precedence.
  • the voice command processing device 140 receives voice data from the microphone 6, recognizes voice control commands, and exchanges data with the controller through communication in the non-real-time control area.
  • the voice control input is a position command based on the displayed image, and the controller 200 controls the driving of the holder robot 100 so that the position of the image is changed based on the voice control input.
  • the voice command processing device 140 includes a voice command DB in which the characteristics of each person are learned and commands are classified by voice control command; it is configured to recognize the voice control command from the voice data and exchange data with the controller 200 in the non-real-time control area.
  • the voice control command includes a grid voice command; when the grid voice command is given, the image data is divided into a plurality of regions, an index is displayed in each divided region, and the user selects a specific index.
  • the controller 200 controls the driving of the holder robot 100 so that the specific index partition area becomes the entire screen.
  • the user issues movement commands through the wirelessly connected microphone 6; a learning algorithm that can learn each person's characteristics is provided, and the learned result can easily be uploaded to the voice command processing device 140.
  • the voice command is based on the displayed image, and the image may be moved by up/down, left/right, near/far, and right-rotation/left-rotation commands.
  • the system provides a voice command vocabulary (up, down, left, right, near, far, etc.) for fine-tuning movements according to the degrees of freedom of laparoscope 1 movement, together with a scale calibration algorithm optimized for laparoscopic surgery.
  • the screen movement speed, i.e. the robot movement speed, can be optimized to meet user requirements.
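The grid voice command described above (divide the image, number the regions, jump to the spoken index) can be sketched as follows. The 3x3 layout, row-major numbering starting at 1, and the [-1, 1] normalized pan target are illustrative assumptions, not details from the patent:

```python
def grid_target(image_w, image_h, rows, cols, index):
    """Return the center of the numbered grid cell the user selected,
    normalized to [-1, 1], so it can be issued as a pan command that
    brings that cell to the screen center. Index runs row-major from 1.
    (Numbering scheme and normalization are assumptions.)"""
    r, c = divmod(index - 1, cols)
    cx = (c + 0.5) * image_w / cols   # cell center in pixels
    cy = (r + 0.5) * image_h / rows
    # Offset of the cell center from the screen center, normalized.
    return (2 * cx / image_w - 1, 2 * cy / image_h - 1)

# "grid" splits a 640x480 image 3x3; the voice command "4" selects cell 4.
print(grid_target(640, 480, 3, 3, 4))   # cell 4 = middle-left region
```

The controller would then drive the holder robot by this offset (scaled by the calibrated speed) so that the selected region fills the screen.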
  • in the external operation mode, the external control device 210 is communicatively connected to the external interface 201 provided in the controller 200, and the controller 200 controls the driving of the holder robot 100 based on an external adjustment input by the user (S11).
  • this external operation mode operates in the non-real-time control area and exchanges data with the real-time control area of the controller 200 through internal communication; the external control input is a position command based on the displayed image, and the controller 200 controls the driving of the holder robot 100 so that the position of the image is changed based on the external adjustment input.
  • the control inputs are position and speed.
  • the robot 100 is controlled through the camera holder robot controller 200.
  • the external control device 210 must transmit data to the camera holder robot controller 200 according to a predetermined protocol, and related APIs are provided.
  • the image processing device 130, which has a surgical tool learning DB classified by surgical tool type through learning of sample surgical tool images, recognizes the surgical tool in the image data, identifies its location and type, and exchanges data with the controller 200 in the non-real-time control area; the controller 200 controls the driving of the holder robot 100 so that the recognized surgical tool is maintained within a specific area within the image data.
  • this automatic operation mode is executed only when the user designates the automatic operation mode, image data is input (S12), and the surgical tool is recognized by the image processing device 130 (S13).
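A hedged sketch of the automatic operation mode's keep-in-view behavior described above: if the recognized tool's centroid drifts out of a central dead zone, a proportional pan velocity re-centers it. The dead-zone size, gain, and function interface are illustrative assumptions; the patent does not specify the control law:

```python
def tracking_velocity(tool_cx, tool_cy, img_w, img_h, dead_zone=0.2, gain=0.5):
    """If the recognized tool leaves the central 'keep' region, return a
    pan velocity that re-centers it; inside the dead zone the robot
    holds still. (Gains and dead-zone size are assumptions.)"""
    ex = tool_cx / img_w - 0.5   # normalized error from screen center
    ey = tool_cy / img_h - 0.5
    vx = gain * ex if abs(ex) > dead_zone else 0.0
    vy = gain * ey if abs(ey) > dead_zone else 0.0
    return vx, vy

print(tracking_velocity(320, 240, 640, 480))  # tool centered -> (0.0, 0.0)
print(tracking_velocity(600, 240, 640, 480))  # tool far right -> pan toward it
```

The dead zone prevents the camera from chattering while the tool works within the acceptable central area.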
  • the driving priority of the holder robot 100 is, in order: foot pedal 110 input, voice control command, external adjustment input, and automatic operation mode.
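The priority order just stated can be expressed as a simple arbitration routine. The source names and dictionary interface are hypothetical, but the ordering follows the document:

```python
# Priority order from the text: foot pedal > voice > external > automatic.
PRIORITY = ["foot_pedal", "voice", "external", "auto"]

def arbitrate(inputs):
    """Pick the pending command from the highest-priority source.
    `inputs` maps a source name to its pending command (or None).
    Sketch only -- the real controller arbitrates inside the
    real-time loop, not via a Python dict."""
    for source in PRIORITY:
        cmd = inputs.get(source)
        if cmd is not None:
            return source, cmd
    return None, None

print(arbitrate({"voice": "left", "auto": "track"}))     # voice wins over auto
print(arbitrate({"foot_pedal": "up", "voice": "left"}))  # pedal wins over voice
```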
  • FIG. 13 is a block diagram of a laparoscopic robot artificial intelligence surgery guide system based on image information according to an embodiment of the present invention.
  • FIG. 14 is a block diagram of a data collection unit according to an embodiment of the present invention
  • FIG. 15 is a block diagram of a learning DB according to an embodiment of the present invention
  • FIG. 16 is a block diagram of a guide monitoring unit according to an embodiment of the present invention.
  • the image information-based laparoscopic robot artificial intelligence surgery guide system is a system that guides surgery by monitoring the surgical process based on image data captured by the laparoscopic camera 2 of the laparoscopic camera holder robot 100.
  • the image information-based laparoscopic robot artificial intelligence surgery guide system according to an embodiment of the present invention is configured to further include, in addition to the aforementioned control system, a data collection unit, a data learning unit, a learning DB, a guide monitoring unit, a notification means, and the like.
  • the data collection unit is configured to collect surgical image data 311 , sample surgical tool image 312 , sample removal target tool image 313 , and audio data 314 .
  • the data learning unit learns the collected surgical image data 311, sample surgical tool images 312, sample removal target tool images 313, and audio data 314, and each set of learned data is stored in the surgical image learning DB 331, the surgical tool learning DB 332, the removal target tool learning DB 333, and the voice command DB 334 of the learning DB 330.
  • the image processing device 130 receives the current surgical image data captured by the camera and exchanges data with the controller 200 that controls the driving of the holder robot 100 through communication in a non-real-time control area.
  • the data learning unit is configured to learn the collected surgical image data 311, classify the surgical image data by surgery type and operator, and store the surgical learning data in the surgical image learning DB.
  • in the surgical learning data, the surgical sequence characteristics are learned by surgery type and by operator.
  • the data learning unit 320 learns the sample surgical instrument images 312, classifies them by surgical instrument type, and stores them in the surgical instrument learning DB 332.
  • the image processing device 130 is configured to exchange data with the controller 200 in a non-real-time control area by recognizing a surgical tool in the current surgical image data to determine the location and type.
  • in the surgical learning data, the characteristics of the position and direction of the surgical tool according to the surgical sequence can be learned.
  • the data learning unit 320 learns the sample removal target tool images 313, classifies them according to removal target tool types, and stores them in the removal target tool learning DB 333.
  • the image processing device 130 recognizes the tool to be removed in the current surgical image data, identifies the location and type, and exchanges data with the controller 200 in the non-real-time control area.
  • the data learning unit 320 learns the characteristics of each person from the voice data, classifies them according to voice control commands, and stores them in the voice command DB 334.
  • the voice command processing device 140 recognizes a voice control command from voice data and exchanges data with the controller 200 in a non-real-time control area.
  • the guide monitoring unit 350 is basically configured to generate surgical guide data by comparing the surgical learning data stored in the learning DB 330 with the current surgical image data captured by the camera, and the notification means 340 is configured to announce the surgical guide data.
  • the guide monitoring unit 350 is configured to include a search engine 351 that retrieves the surgical learning data to be compared and analyzed based on the current surgical image data; a comparison and analysis unit 352 that compares and analyzes the current surgical image data and the surgical learning data in real time; and an event determination unit 353 that determines, according to the comparison and analysis of the comparison and analysis unit 352, whether a sequence step is missing or whether the change of the current surgical image data compared to the surgical learning data exceeds a threshold.
  • the notification means 340 is configured to transmit a notification signal when an event occurs.
  • the controller 200 controls the driving of the holder robot 100 to capture an image of the location where the event occurred.
  • in the surgical learning data, the characteristics of the position and direction of the surgical tool according to the surgical sequence are learned. Accordingly, the comparison and analysis unit 352 compares and analyzes the position and direction characteristics of the surgical tools according to the surgical sequence in the surgical learning data against the surgical tools in the current surgical image data, and the event determination unit 353 determines an event as to whether the position and direction characteristics according to the sequence have changed beyond a threshold value.
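As an illustration of the event determination described above, the sketch below flags a missing sequence step and a tool-position deviation beyond a threshold. The per-step (x, y) data layout, Euclidean metric, and threshold value are assumptions; the patent leaves the comparison method unspecified:

```python
def detect_events(learned_seq, current_seq, threshold=10.0):
    """Compare the learned surgical sequence with the observed one: flag
    steps missing from the observation and steps whose tool position
    deviates beyond a threshold. Each sequence maps a step name to an
    (x, y) tool position. (Illustrative data layout and metric.)"""
    events = []
    for step, (lx, ly) in learned_seq.items():
        if step not in current_seq:
            events.append(("missing_step", step))
            continue
        cx, cy = current_seq[step]
        if ((cx - lx) ** 2 + (cy - ly) ** 2) ** 0.5 > threshold:
            events.append(("deviation", step))
    return events

learned = {"incision": (100, 50), "dissection": (120, 80)}
observed = {"incision": (104, 52)}          # dissection not yet observed
print(detect_events(learned, observed))     # [('missing_step', 'dissection')]
```

Each returned event would trigger the notification means and, per the text, drive the holder robot to image the event location.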
  • the image processing device 130 includes the removal target tool learning DB 333, classified by removal target tool type through learning of the sample removal target tool images 313; it recognizes the removal target tool in the current surgical image data, identifies its location and type, and exchanges data with the controller 200 in the non-real-time control area.
  • the controller 200 controls the driving of the holder robot 100 to capture an image of the position of the tool to be removed right before the surgery is completed.
  • the controller 200 includes a removal decision unit 356 that determines whether the tool to be removed has been removed; when it is determined right before the surgery is completed that the tool has not been removed, the driving of the holder robot 100 is controlled to capture an image of the tool to be removed.
  • the voice command processing device 140 is configured to receive voice data from the microphone 6, recognize voice control commands, and exchange data with the controller 200 through communication in the non-real-time control area.
  • the voice control input is a position command based on the displayed image
  • the controller 200 controls the driving of the holder robot to change the position of the image based on the voice control input.
  • in the voice command DB, the characteristics of each person are learned and classified according to voice control commands; voice control commands are recognized from the voice data to exchange data with the controller in the non-real-time control area.
  • the robot posture storage unit 357 is configured to command storage of the robot's posture during surgery, at a given time point or over a specific time range. That is, when a storage command is input by a voice command or the foot pedal 110, the robot posture at that moment is stored, and the controller can control the driving of the holder robot to return to the stored posture at the user's request.
  • the momentary robot posture to be memorized is stored through a voice command or the foot pedal 110, and the robot is later driven so that the memorized view is displayed again at the user's request. When a command to move to the previous location is given, the current location is automatically saved before the robot moves back to the previous location.
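The save-and-return behavior described above can be sketched as a small posture stack: a save command memorizes the current posture, and a return command automatically saves the current posture before restoring the previous one. The class interface and joint-tuple representation are assumptions:

```python
class PostureMemory:
    """Sketch of the robot-posture storage unit: 'save' memorizes a
    posture (triggered by voice or foot pedal); 'go_back' first saves
    the current posture automatically, then returns the one saved
    before, matching the behavior described in the text.
    (Interface is an assumption.)"""

    def __init__(self):
        self._stack = []

    def save(self, posture):
        self._stack.append(posture)

    def go_back(self, current_posture):
        if not self._stack:
            return None
        target = self._stack.pop()
        self._stack.append(current_posture)   # auto-save before moving
        return target

mem = PostureMemory()
mem.save((10, 20, 30))                 # user: "save this view"
print(mem.go_back((40, 50, 60)))       # returns (10, 20, 30)
print(mem.go_back((10, 20, 30)))       # returns (40, 50, 60)
```

Because the current posture is pushed before each return move, issuing "go back" twice toggles between the two most recent views, as the text describes.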
  • a situation in which the target point cannot be reached, due to an environment changed during surgery (movement of organs) or the current position of a surgical tool, can be identified, and an object that becomes an obstacle to the movement can be displayed on the screen.
  • the obstacle recognition unit may be configured to recognize a tool in the current surgical image data that matches neither the surgical tool DB nor the removal target tool DB as an obstacle and to display it in the surgical image data.


Abstract

The present invention relates to an image information-based laparoscope robot artificial intelligence surgery guide system and, more specifically, to a system for monitoring a surgical process on the basis of image data captured by a laparoscope camera of a laparoscope camera holder robot, thereby guiding the surgery. The image information-based laparoscope robot artificial intelligence surgery guide system comprises: a data collecting unit for collecting surgery image data; a surgery image learning DB for learning the collected surgery image data, classifying same with regard to each surgery type and each surgeon, and storing surgery learning data; an image processing device for receiving current surgery image data captured by the camera and exchanging data with a controller for controlling the driving of the holder robot through communication in a non-real-time control area; a guide monitoring unit for generating surgery guide data in comparison with the surgery learning data and the current surgery image data captured by the camera; and a notification means for guiding the surgery guide data.

Description

Image information-based laparoscopic robot artificial intelligence surgery guide system
The present invention relates to an image information-based laparoscopic robot artificial intelligence surgery guide system and guide method.
Medically, surgery refers to curing a disease by cutting, incising, or otherwise manipulating skin, mucous membranes, or other tissues using medical instruments. In particular, for open surgery, in which the skin at the surgical site is incised and opened so that the internal organs can be treated, reshaped, or removed, robotic surgery has recently come into the spotlight as an alternative because of problems such as bleeding, side effects, patient pain, and scarring.
A surgical robot is a robot capable of substituting for surgical actions performed by a surgeon. Such surgical robots can perform more accurate and precise movements than humans and enable remote surgery. Surgical robots currently being developed worldwide include bone surgery robots, laparoscopic surgery robots, and stereotactic surgery robots.
A surgical robot device is generally composed of a master console and a slave robot. When the operator manipulates a control lever (for example, a handle) provided on the master console, an instrument coupled to or held by the robot arm of the slave robot is manipulated to perform the surgery.
The surgical robot is provided with a robot arm for surgical manipulation, and an instrument is mounted on the front end of the robot arm. When surgery is performed with an instrument mounted on the front end of the robot arm, the instrument moves together with the robot arm; in the process of puncturing part of the patient's skin and inserting the instrument to perform the surgery, there is a risk of unnecessary damage to the skin. In addition, when the surgical area is wide, the advantages of robotic surgery may be halved, for example if the skin must be incised along the path the instrument moves or punctured at each surgical site.
Therefore, for the instrument mounted on the front end of the robot arm, a virtual center of rotation is set at a predetermined position on the distal portion, and the robot arm is controlled so that the instrument rotates about this point. This virtual center point is called the 'remote center' or 'RCM (remote center of motion)'.
The present invention has been devised to solve the conventional problems described above. According to an embodiment of the present invention, an object is to provide an image information-based laparoscopic robot artificial intelligence surgery guide system that maps guiding surgical image data to the currently captured image data and, when a step is missing from the surgical sequence or a change exceeding a threshold occurs, transmits a notification signal or controls the robot to capture an image of the location of the event immediately before the surgery is completed.
According to an embodiment of the present invention, another object is to provide an image information-based laparoscopic robot artificial intelligence surgery guide system that memorizes the position of a tool (gauze, scalpel, etc.) inserted into the surgical site during surgery and, in order to confirm whether the tool has been removed, automatically captures that position immediately before the surgery is completed, or determines whether the tool has been removed and, if it has not, controls the robot to capture that position immediately before the surgery is completed.
According to an embodiment of the present invention, another object is to provide an image information-based laparoscopic robot artificial intelligence surgery guide system that stores the momentary robot posture to be memorized through a voice command or the foot pedal and establishes a motion plan for moving so that the memorized momentary view can be shown later at the user's request.
According to an embodiment of the present invention, another object is to provide an image information-based laparoscopic robot artificial intelligence surgery guide system that identifies situations in which the target point cannot be reached due to an environment changed during surgery (movement of organs) or the current surgical tool position, displays objects that become obstacles to the movement on the screen, displays the best possible view in consideration of the kinematic characteristics of the laparoscopic robot, and, when a command to move to the previous location is given, automatically saves the current location and then moves in accordance with the command to return to the previous location.
Meanwhile, the technical problems to be achieved by the present invention are not limited to those mentioned above, and other technical problems not mentioned will be clearly understood by those of ordinary skill in the art to which the present invention belongs from the description below.
본 발명의 목적은, 복강경 카메라 홀더 로봇의 복강경 카메라에서 촬상된 영상데이터를 기반으로 수술 과정을 모니터링하여 수술을 가이드하는 시스템에 있어서, 수술영상데이터를 수집하는 데이터수집부; 수집된 수술영상데이터를 학습하여 수술종류별, 수술자별로 분류하여 수술학습데이터를 저장하는 수술영상학습 DB; 상기 카메라에서 촬영되는 현재 수술영상데이터를 입력받아 상기 홀더로봇의 구동을 제어하는 제어기와 비실시간 제어영역에서 통신을 통해 데이터를 교환하는 영상처리장치; 상기 수술학습데이터와, 상기 카메라에서 촬영되는 현재 수술영상데이터를 대비하여, 수술가이드데이터를 생성하는 가이드모니터링부; 및 상기 수술가이드데이터를 안내하는 알림수단;을 포함하는 것을 특징으로 하는 영상정보기반 복강경 로봇 인공지능 수술 가이드 시스템으로서 달성될 수 있다. An object of the present invention is to provide a system for guiding surgery by monitoring a surgical procedure based on image data captured by a laparoscopic camera of a laparoscopic camera holder robot, comprising: a data collection unit for collecting surgical image data; a surgical image learning DB that stores the surgical learning data by learning the collected surgical image data and classifying them by surgery type and operator; an image processing device that receives current surgical image data captured by the camera and exchanges data through communication in a non-real-time control area with a controller that controls driving of the holder robot; a guide monitoring unit generating surgical guide data by comparing the surgical learning data with the current surgical image data captured by the camera; And notification means for guiding the surgical guide data; it can be achieved as an image information-based laparoscopic robot artificial intelligence surgery guide system comprising a.
그리고 상기 수술학습데이터는 수술종류 별, 수술자 별로 수술 시퀀스 특징이 학습되는 것을 특징으로 할 수 있다. In addition, the surgical learning data may be characterized in that surgical sequence characteristics are learned for each surgical type and each operator.
또한 상기 가이드모니터링부는, 상기 현재 수술영상데이터와, 상기 수술학습데이터를 실시간으로 비교분석하는 비교분석부; 및 상기 비교분석부의 비교분석에 따라, 시퀀스가 누락되거나, 상기 현재 수술영상데이터가 상기 수술학습데이터 대비 임계치 이상의 변화가 존재하는지에 대한 이벤트를 판단하는 이벤트 판단부;를 포함하고, 상기 알림수단은 상기 이벤트 발생시 알림신호를 송출하는 것을 특징으로 할 수 있다. In addition, the guide monitoring unit, a comparison analysis unit for comparing and analyzing the current surgical image data and the surgical learning data in real time; and an event determination unit for determining an event as to whether a sequence is missing or whether a change in the current surgical image data compared to the surgical learning data exceeds a threshold value according to the comparative analysis by the comparison and analysis unit, wherein the notification means comprises: It may be characterized in that a notification signal is transmitted when the event occurs.
그리고 상기 이벤트 발생시, 상기 제어기는 상기 이벤트가 발생된 위치의 영상을 촬영하도록 상기 홀더 로봇의 구동을 제어하는 것을 특징으로 할 수 있다. Further, when the event occurs, the controller may control driving of the holder robot to capture an image of a location where the event occurs.
또한 샘플 수술도구이미지를 학습하여 수술도구 종류별로 분류된 수술도구 학습DB를 포함하고, 상기 영상처리장치는 상기 현재 수술영상데이터 내의 수술도구를 인식하여 위치와 종류를 파악하여 상기 제어기와 비실시간 제어영역에서 데이터를 교환하며, 상기 수술학습데이터는 수술 시퀀스에 따른 수술도구의 위치, 방향의 특징이 학습되고, 상기 비교판단부는 수술학습데이터의 수술 시퀀스에 따른 수술도구의 위치, 방향의 특징과, 현재 수술영상데이터 내의 수술도구를 비교분석하고, 상기 이벤트판단부는 시퀀스에 따른 수술도구 위치, 방향의 특징이 임계치 이상의 변화가 존재하는지에 대한 이벤트를 판단하는 것을 특징으로 할 수 있다. In addition, it includes a surgical tool learning DB classified by surgical tool type by learning sample surgical tool images, and the image processing device recognizes the surgical tool in the current surgical image data and identifies the location and type to perform non-real-time control with the controller. data is exchanged in the area, and the surgical learning data learns the position and direction characteristics of surgical instruments according to the surgical sequence, and the comparison and determination unit position and direction characteristics of the surgical instruments according to the surgical sequence of the surgical learning data, The surgical tools in the current surgical image data may be compared and analyzed, and the event determination unit may determine an event as to whether there is a change of more than a threshold value in the position and direction characteristics of the surgical tool according to the sequence.
그리고 샘플 제거대상도구이미지를 학습하여 제거대상도구 종류별로 분류된 제거대상도구 학습DB를 포함하고, 상기 영상처리장치는 상기 현재 수술영상데이터 내의 제거대상도구를 인식하여 위치와 종류를 파악하여 상기 제어기와 비실시간 제어영역에서 데이터를 교환하며, 상기 제어기는, 상기 제거대상도구가 인식된 경우, 수술완료 직전 상기 제거대상도구의 위치를 촬상하도록 상기 홀더 로봇의 구동을 제어하는 것을 특징으로 할 수 있다. And includes a tool to be removed learning DB classified by type of tool to be removed by learning sample tool images to be removed, and the image processing device recognizes the tool to be removed in the current surgical image data to identify the location and type of the tool to be removed, and the controller and exchanges data in a non-real-time control area, and the controller, when the tool to be removed is recognized, controls driving of the holder robot to capture an image of the position of the tool to be removed right before surgery is completed. .
또한 상기 제거대상도구가 인식된 경우, 상기 제거대상도구의 제거여부를 판단하는 제거여부판단부를 더 포함하고, 상기 제어기는 수술완료 직전 상기 제거대상도구가 제거되지 않았다고 판단된 경우 상기 제거대상도구를 촬상하도록 상기 홀더 로봇의 구동을 제어하는 것을 특징으로 할 수 있다. The controller may further include a removal decision unit for determining whether the tool to be removed is removed when the tool to be removed is recognized, and the controller controls the tool to be removed when it is determined that the tool to be removed is not removed immediately before completion of the operation. It may be characterized in that driving of the holder robot is controlled to capture an image.
The system may further include a microphone that receives voice data from the user, and a voice command processing device that receives the voice data, recognizes a voice control command, and exchanges data with the controller through communication in the non-real-time control domain. The voice control input is a position command referenced to the displayed image, and the controller controls the driving of the holder robot so that the position of the image is changed based on the voice control input.
The voice command processing device may include a voice command DB in which the speech characteristics of individual users have been learned and classified by voice control command; it recognizes voice control commands from the voice data and exchanges data with the controller in the non-real-time control domain.
The system may further include a robot pose storage unit that commands the robot's pose at a given moment, or over a specified time range, during surgery to be stored; the controller controls the driving of the holder robot so that it returns to the stored pose at the user's request.
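The robot pose storage unit can be sketched as a simple store of joint configurations keyed by the moment they were saved. The labels and the assumed 4-DOF pose tuple (tilt, rotation, insertion, axial roll) are illustrative, not the patented implementation.

```python
class PoseStore:
    """Store holder-robot poses during surgery and recall them on request."""

    def __init__(self):
        self._poses = {}

    def save(self, label, pose):
        # pose: (tilt, rotation, insertion, axial roll) of the 4-DOF holder robot
        self._poses[label] = tuple(pose)

    def recall(self, label):
        # The controller would drive the holder robot back to this pose.
        return self._poses[label]

store = PoseStore()
store.save("before_dissection", (0.12, -0.3, 45.0, 1.57))
print(store.recall("before_dissection"))  # (0.12, -0.3, 45.0, 1.57)
```

In the described system the save command would arrive via voice or foot pedal, and recall would trigger the controller's motion planning back to the stored configuration.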
The system may further include an obstacle recognition unit that, when a tool present in the current surgical image data matches neither the surgical tool DB nor the removal-target tool DB, recognizes it as an obstacle and marks it in the surgical image data.
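The matching rule of the obstacle recognition unit can be sketched as follows. The DB contents here are placeholders standing in for the learned databases; only the fall-through logic (unmatched detection becomes an obstacle) reflects the description above.

```python
SURGICAL_TOOL_DB = {"grasper", "scissors", "hook"}  # placeholder entries
REMOVAL_TARGET_DB = {"gauze", "scalpel"}            # placeholder entries

def classify_detection(tool_type):
    """Label a detected object: known surgical tool, removal target, or obstacle."""
    if tool_type in SURGICAL_TOOL_DB:
        return "surgical_tool"
    if tool_type in REMOVAL_TARGET_DB:
        return "removal_target"
    return "obstacle"  # unmatched: mark it in the surgical image data

print(classify_detection("grasper"))         # surgical_tool
print(classify_detection("unknown_object"))  # obstacle
```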
According to the image-information-based laparoscopic robot artificial intelligence surgery guide system of an embodiment of the present invention, the guiding surgical image data is mapped against the currently captured image data, and when a step of the surgical sequence is missing or a change exceeding a threshold occurs, the system can transmit a notification signal or control the robot to image the location of the event immediately before the operation is completed.
According to the system of an embodiment of the present invention, the positions of tools introduced into the surgical site during surgery (gauze, scalpel, etc.) are memorized, and to confirm that those tools have been removed the system automatically images the corresponding positions immediately before the operation is completed, or determines whether removal has occurred and, if not, controls the robot to image the corresponding positions immediately before completion.
According to the system of an embodiment of the present invention, the robot pose at a moment to be remembered can be stored via a voice command or foot pedal, and a motion plan can later be established to move the robot so that the remembered view is shown again at the user's request.
In addition, according to the system of an embodiment of the present invention, situations in which the target point cannot be reached due to an environment changed during surgery (e.g., organ movement) or the current positions of the surgical tools can be identified, objects obstructing the motion can be marked on the screen, the best achievable view can be displayed in consideration of the kinematic characteristics of the laparoscopic robot, and, on a command to move to a previous position, the current position is automatically stored before the robot returns to the previous position.
Meanwhile, the effects obtainable from the present invention are not limited to those mentioned above, and other effects not mentioned will be clearly understood by those of ordinary skill in the art from the description below.
The following drawings attached to this specification illustrate preferred embodiments of the present invention and, together with the detailed description, serve to further the understanding of the technical idea of the present invention; therefore, the present invention should not be construed as limited only to the matters shown in the drawings.
FIG. 1 is a configuration diagram of a laparoscope holder robot system having a laparoscope mounting adapter and an RCM structure according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a drive mechanism having an RCM structure according to an embodiment of the present invention;
FIG. 3 is a side view of a laparoscope holder robot having a laparoscope mounting adapter and an RCM structure according to an embodiment of the present invention;
FIG. 4 is a side cross-sectional view of a laparoscope adapter device attached to a laparoscope according to an embodiment of the present invention;
FIG. 5 is a side cross-sectional view of a detachable unit according to an embodiment of the present invention;
FIG. 6 is a front view of a detachable unit according to an embodiment of the present invention;
FIG. 7 is a block diagram showing the control flow of a controller according to an embodiment of the present invention;
FIG. 8 is a configuration diagram of a laparoscopic camera holder robot control system according to an embodiment of the present invention;
FIG. 9 is a block diagram of an external control device and a controller communicatively connected through an external interface according to an embodiment of the present invention;
FIG. 10 is a block diagram of a laparoscopic camera holder robot control system with four operation modes according to an embodiment of the present invention;
FIG. 11 is a flowchart of a laparoscopic camera holder robot control method according to an embodiment of the present invention;
FIG. 12 shows example screens in voice command mode when a grid command is input and when the number 4 is spoken, according to an embodiment of the present invention;
FIG. 13 is a block diagram of an image-information-based laparoscopic robot artificial intelligence surgery guide system according to an embodiment of the present invention;
FIG. 14 is a block diagram of a data collection unit according to an embodiment of the present invention;
FIG. 15 is a block diagram of a training DB according to an embodiment of the present invention;
FIG. 16 is a block diagram of a guide monitoring unit according to an embodiment of the present invention.
Hereinafter, a laparoscope holder robot system having a laparoscope mounting adapter and an RCM structure according to an embodiment of the present invention will be described. First, FIG. 1 shows a configuration diagram of such a system.
First, the configuration and function of the laparoscope holder robot having the laparoscope mounting adapter and the RCM structure are described with a focus on the drive mechanism; second, the control system and control method for this holder robot are described; and third, the image-information-based method and system for monitoring the laparoscopic surgery process are described.
FIG. 2 is a schematic diagram of the drive mechanism having the RCM structure according to an embodiment of the present invention, and FIG. 3 is a side view of the laparoscope holder robot having the laparoscope mounting adapter and the RCM structure.
The laparoscope holder robot 100 having a laparoscope mounting adapter and an RCM structure according to an embodiment of the present invention basically operates so that the laparoscope 1, which has a camera 2 at one end and an image sensor 3 at the other end, rotates about a remote center of motion (RCM) point 4.
The laparoscope holder robot 100 generally comprises a body 5, an RCM structure 10, a first rotary drive unit 20, a second rotary drive unit 30, a linear movement device 40, and a laparoscope adapter device 50 having a laparoscope axial rotation device 60.
The holder robot 100 has four degrees of freedom, and the laparoscope 1 can be attached to and detached from the holder robot 100 by means of the laparoscope adapter device 50.
As described later, the robot has four degrees of freedom: the laparoscope 1 is tilted up and down about the RCM point 4 on the laparoscope by the first rotary drive unit 20; it is rotated about the virtual axis connecting the RCM point 4 and the first rotary joint 11 by the second rotary drive unit 30; it is moved in its longitudinal direction by the linear movement device 40; and it is rotated about its longitudinal axis by the laparoscope axial rotation device 60.
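The first three of these degrees of freedom determine the laparoscope tip position, since the scope always passes through the RCM point. A highly simplified forward-kinematics sketch is shown below; the angle conventions, axis choices, and units are assumptions made for illustration only, not the robot's actual kinematic model.

```python
import math

def tip_position(rcm, tilt, rotation, insertion):
    """Simplified tip position of a laparoscope pivoting about an RCM point.

    tilt, rotation: angles (rad) of the scope axis about the RCM point;
    insertion: depth (mm) of the scope past the RCM point. Axial roll
    rotates the image, not the tip, so it is omitted here.
    """
    # Direction of the scope axis after tilting and rotating about the pivot.
    dx = math.sin(tilt) * math.cos(rotation)
    dy = math.sin(tilt) * math.sin(rotation)
    dz = -math.cos(tilt)  # pointing into the abdomen
    return tuple(r + insertion * d for r, d in zip(rcm, (dx, dy, dz)))

# Straight in, 50 mm past the trocar point at the origin:
print(tip_position((0.0, 0.0, 0.0), 0.0, 0.0, 50.0))  # (0.0, 0.0, -50.0)
```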
The rear end of the RCM structure 10 is coupled to the body at the first rotary joint 11, and its front end is coupled to the laparoscope 1 side at the second rotary joint 16. The first rotary drive unit 20 is provided in the body 5 and rotates the RCM structure 10 about the first rotary joint 11, so that the laparoscope 1 rotates about the RCM point 4.
More specifically, as shown in FIG. 2, the RCM structure 10 combines a first link unit 12 and a second link unit 14. The first rotary joint 11 comprises a 1-1 rotary joint 11-1 and a 1-2 rotary joint 11-2 located a specific distance below the 1-1 rotary joint 11-1, and the second rotary joint 16 comprises a 2-1 rotary joint 16-1 and a 2-2 rotary joint 16-2.
The first link unit 12 comprises a 1-1 link 12-1, one end of which is connected to the 1-1 rotary joint 11-1, and a 1-2 link 12-2, one end of which is connected to the other end of the 1-1 link 12-1 by a first hinge 13 and the other end of which is connected to the linear movement device 40 through the 2-1 rotary joint 16-1.
The second link unit 14 comprises a 2-1 link 14-1, one end of which is connected to the 1-2 rotary joint 11-2, and a 2-2 link 14-2, one end of which is connected to the other end of the 2-1 link 14-1 by a second hinge 15 and the other end of which is connected to the linear movement device 40 through the 2-2 rotary joint 16-2.
The first link unit 12 and the second link unit 14 are hinged together at the point where the 1-2 link 12-2 and the 2-1 link 14-1 intersect.
The second rotary drive unit 30 is installed on one side of the body 5 and drives the laparoscope 1 to rotate about the virtual line connecting the RCM point 4 and the first rotary joint 11.
As shown in FIGS. 2 and 3, the RCM point 4 is located at the point where the laparoscope 1 intersects the virtual line connecting the 1-1 rotary joint 11-1 and the 1-2 rotary joint 11-2. Accordingly, by driving the first rotary drive unit through the RCM structure 10, the laparoscope 1 can be rotated about the RCM point 4.
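The geometric relationship just stated (the RCM point lies where the laparoscope axis crosses the virtual line through joints 11-1 and 11-2) can be checked numerically with a basic line-intersection computation. The coordinates below are made up for illustration and reduce the geometry to a plane.

```python
def line_intersection(p1, p2, q1, q2):
    """Intersection of line p1-p2 with line q1-q2 in the plane (assumed non-parallel)."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Virtual line through joints 11-1 and 11-2 (made-up coordinates) and the scope axis:
joint_line = ((0.0, 2.0), (0.0, 0.0))    # vertical line x = 0
scope_axis = ((-3.0, 4.0), (3.0, -2.0))  # laparoscope longitudinal axis
print(line_intersection(*joint_line, *scope_axis))  # (0.0, 1.0) -> the RCM point
```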
The linear movement device 40 is connected on the second rotary joint 16 and is configured to move the laparoscope 1 in its longitudinal direction. The specific configuration, means, and form of the linear movement device 40 are not limited as long as it can move the laparoscope 1 along the laparoscope's longitudinal direction.
The laparoscope adapter device is configured to attach the laparoscope to, and detach it from, the linear movement device.
FIG. 4 is a side cross-sectional view of the laparoscope adapter device attached to the laparoscope according to an embodiment of the present invention; FIG. 5 is a side cross-sectional view, and FIG. 6 a front view, of the detachable unit; and FIG. 7 is a block diagram showing the control flow of the controller according to an embodiment of the present invention.
The laparoscope adapter device 50 according to an embodiment of the present invention includes a fastening means provided on one side of its upper portion for attachment to and detachment from the linear movement device 40, and a detachable unit 70 to which the laparoscope 1 is attached and from which it is detached. The detachable unit 70 is configured to be replaceable according to the diameter of the laparoscope.
In the laparoscope adapter device 50, the laparoscope axial rotation device 60 may be detachably installed between the linear movement device 40 and the detachable unit 70; through this axial rotation device 60, the laparoscope can be rotated about its longitudinal axis.
That is, according to an embodiment of the present invention, the problem of differing laparoscope diameters is solved and the adapter can be mounted on the robot 100 like a module. Since laparoscope diameters vary, two or three sizes of detachable unit may be provided, while the external dimensions of the adapter 50 are kept constant so that it can be coupled to the robot 100. Since axial rotation is not always required, the laparoscope axial rotation device is mountable as a module. Since a laparoscope 1 with an angled tip can be mounted, the adapter is configured so that the central axis can be aligned, and the motor drive 62 and the electric motor 61 are installed together as one module.
As shown in FIGS. 5a and 5b, the detachable unit 70 according to an embodiment of the present invention may comprise a mounting portion 71 that has a cylindrical inner surface into which the laparoscope 1 is inserted and in which a slit is formed in the longitudinal direction, and a cam clamping member 72 that, when operated, tightens the mounting portion 71 to fix the laparoscope 1.
The controller 200 controls the driving of the first rotary drive unit 20, the second rotary drive unit 30, the linear movement device 40, and the laparoscope axial rotation device 60 to adjust the tip position of the laparoscope 1 and thereby, as described later, the imaging position of the laparoscopic camera 2.
In addition, during surgery the laparoscope is frequently removed and reinserted to clean the lens, so it must be easy to mount. The adapter device 50 remains coupled to the laparoscope axial rotation device 60, and only the laparoscope 1 itself is detached.
The position recognition unit according to an embodiment of the present invention recognizes, relative to a reference position based on an angular reference and a length reference, the longitudinal movement position of the laparoscope and its rotation angle about the longitudinal axis.
Therefore, when the procedure is interrupted and the laparoscope 1 is withdrawn for necessary work, and the user then places the laparoscope 1 back at the reference position, the controller 200 calculates from the reference position the laparoscope position at the point of interruption and controls the driving of the linear movement device 40 and the laparoscope axial rotation device 60 to move the laparoscope 1 back to that position.
For example, when the laparoscope 1 is withdrawn and reinstalled after necessary work, its longitudinal position must not change. To this end, a reference line is provided so that the light-source fitting already mounted on the laparoscope 1 is aligned with the adapter device 50, fixing the angle, and the length is fixed by aligning the laparoscope 1 with the stepped portion on the laparoscope. That is, after the laparoscope holder robot 100 is coupled with the laparoscope 1 and mounted on the robot system, a calibration process establishes the RCM point 4 and the reference state; thereafter, when the laparoscope is detached and reattached, no further calibration is required.
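The recovery motion described above reduces to simple arithmetic: the controller knows the insertion depth and axial angle at the interruption point relative to the calibrated reference, so after the scope is re-seated at the reference it replays that offset. A minimal sketch with assumed units (mm, rad) and names:

```python
def recovery_motion(interrupt_pose, reference_pose=(0.0, 0.0)):
    """Motion (insertion, axial rotation) to replay after the laparoscope is
    re-seated at the reference position, restoring the pre-interruption view."""
    d_insert = interrupt_pose[0] - reference_pose[0]
    d_roll = interrupt_pose[1] - reference_pose[1]
    return d_insert, d_roll

# Interrupted at 62 mm insertion and 0.75 rad axial rotation from the reference:
print(recovery_motion((62.0, 0.75)))  # (62.0, 0.75)
```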
Hereinafter, the configuration, functions, and control method of the laparoscopic camera holder robot control system according to an embodiment of the present invention will be described.
First, FIG. 8 shows a configuration diagram of the laparoscopic camera holder robot control system according to an embodiment of the present invention. FIG. 9 is a block diagram of the external control device and the controller communicatively connected through the external interface, and FIG. 10 is a block diagram of the control system with its four operation modes.
The laparoscopic camera holder robot control system according to an embodiment of the present invention is a system for controlling the driving of the aforementioned laparoscopic camera holder robot 100.
As mentioned above, the controller 200 basically controls, on the basis of control command signals, the drive units of the holder robot 100, namely the first rotary drive unit 20, the second rotary drive unit 30, the linear movement device 40, and the laparoscope axial rotation device 60.
The controller 200 is divided into a real-time control domain and a non-real-time control domain, which exchange data with each other through internal communication. The real-time controller 202 of the controller 200 receives control signals in the real-time domain from the posture measurement unit and from the highest-priority control input means.
The highest-priority control input means is a means by which the user inputs a highest-priority control command signal to the controller 200 in the real-time control domain; in an embodiment of the present invention it is a foot pedal 110, which the user operates to input control signals to the controller 200.
The posture measurement unit measures the posture data of the holder robot 100 in the real-time control domain and transmits it to the controller 200; in an embodiment of the present invention the posture measurement unit is an IMU sensor 120.
The display unit 150 is configured to display the image data captured by the laparoscopic camera 2 in real time.
The external control device 210 is communicatively connected to the external interface 201 provided in the controller 200 and, through external adjustment inputs, adjusts the tip position of the laparoscope 1 and thereby the position of the image displayed on the display unit 150.
The external interface 201 operates in the non-real-time control domain and exchanges data with the real-time control domain of the controller through internal communication.
The external adjustment input is a position command referenced to the displayed image, and the controller 200 controls the driving of the holder robot 100 so that the position of the image is changed based on the external adjustment input.
That is, the external control device 210 is a device with which the operator (user) moves the camera holder robot 100 during surgery: by steering the position shown in the laparoscopic image, the operator indirectly steers the tip position of the laparoscope 1 attached to the camera holder robot 100, and thus the displayed image. The external control device 210 is connected to the camera holder robot controller 200 through the "external interface 201" module of the controller, which supports various communication methods (ADS, TCP/IP, Serial, etc.). The external interface 201 module operates in the non-real-time domain and exchanges data with the real-time control domain of the camera holder robot controller 200 through internal communication.
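The split between the non-real-time external interface and the real-time control domain can be sketched with a thread-safe queue standing in for the internal communication. This is a simplified illustration: the specification names ADS, TCP/IP, and Serial as possible transports but does not fix the internal mechanism, and the function names here are invented for the sketch.

```python
import queue

# Internal communication channel between the non-real-time external
# interface module and the real-time control domain.
internal_bus = queue.Queue()

def external_interface_receive(position_command):
    # Non-real-time domain: a command arrives over ADS/TCP/Serial and is
    # forwarded to the real-time controller via internal communication.
    internal_bus.put(("image_position", position_command))

def realtime_control_step():
    # Real-time domain: consume a pending command without blocking the loop.
    try:
        return internal_bus.get_nowait()
    except queue.Empty:
        return None

external_interface_receive((0.3, -0.1))  # hypothetical image-position command
print(realtime_control_step())  # ('image_position', (0.3, -0.1))
```

The non-blocking `get_nowait` reflects the design constraint that the real-time loop must never stall waiting on the non-real-time side.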
The image processing device 130 receives the image data captured by the camera 2 and exchanges data with the controller 200 through communication in the non-real-time control domain.
The image processing device 130 includes a surgical tool training DB in which sample surgical tool images have been learned and classified by surgical tool type; it recognizes the surgical tools in the image data, identifies their positions and types, and exchanges data with the controller 200 in the non-real-time control domain.
The voice command processing device 140 receives the voice data picked up from the user by a microphone 6, recognizes voice control commands, and exchanges data with the controller through communication in the non-real-time control domain.
The voice control input is a position command referenced to the displayed image, and the controller 200 controls the driving of the holder robot 100 so that the position of the image is changed based on the voice control input.
The voice command processing device 140 includes a voice command DB in which the speech characteristics of individual users have been learned and classified by voice control command; it recognizes voice control commands from the voice data and exchanges data with the controller 200 in the non-real-time control domain.
In addition, when the rear end of the laparoscope 1 has an inclination angle, the laparoscopic camera holder robot control system according to an embodiment of the present invention includes a laparoscope inclination angle correction unit that, based on this inclination angle and on the kinematic map between the screen coordinate frame of the image data and the laparoscope tip coordinate frame, calculates the motion of the laparoscope tip coordinate frame corresponding to a motion of the screen coordinate frame.
That is, with an angled laparoscope 1, even when the laparoscope 1 moves in a straight line the view on the screen appears to move obliquely. To resolve this, the laparoscope axial rotation and the RCM-referenced linear motion must complement each other.
In an embodiment of the present invention, a means of inputting the inclination angle of the mounted laparoscope (voice command, selection button, etc.) is provided; based on the kinematic map between the screen coordinate frame and the laparoscope tip coordinate frame, the tip-frame motion corresponding to a screen-frame motion is calculated, and the robot motion required to produce that tip-frame motion is generated.
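The kinematic-map correction can be sketched in its simplest planar form: a desired on-screen displacement is rotated by the image rotation induced by the angled optics to obtain the tip-frame motion. This is an assumed simplification; the actual map in such a system would account for the full 3D geometry of the angled scope.

```python
import math

def screen_to_tip(screen_dx, screen_dy, image_rotation):
    """Rotate a screen-frame displacement into the laparoscope tip frame.

    image_rotation: rotation (rad) between screen axes and tip axes,
    induced by the angled scope and its axial roll (assumed known).
    """
    c, s = math.cos(image_rotation), math.sin(image_rotation)
    tip_dx = c * screen_dx - s * screen_dy
    tip_dy = s * screen_dx + c * screen_dy
    return tip_dx, tip_dy

# With a 90-degree image rotation, "move right on screen" maps to
# a motion along the other tip axis:
dx, dy = screen_to_tip(1.0, 0.0, math.pi / 2)
print(round(dx, 6), round(dy, 6))  # 0.0 1.0
```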
Fig. 11 is a flowchart of a laparoscopic camera holder robot control method according to an embodiment of the present invention. Fig. 12 shows an example of the screen when a grid command is given in voice command mode, and of the screen after the voice command "4" is issued.
Basically, the laparoscopic camera holder robot control system according to an embodiment of the present invention can operate in a manual operation mode, an external operation mode, a basic operation mode, and an automatic operation mode. The basic operation mode is driven by the foot pedal and by voice commands; as noted above, foot-pedal control belongs to the real-time control domain and takes priority over all other modes.
First, after the laparoscope 1 is mounted on the holder robot 100 via the adapter device 50 (S1), the user adjusts the position of the holder robot 100 in manual operation mode (S2, S3), and the image data from the laparoscopic camera 2 is shown on the display unit 150. For example, the user may press a "manual mode" button on the robot 100 or the controller 200 and then steer the position of the robot 100; the button is lit while the mode is active, and pressing it again turns the light off and releases manual operation mode.
When manual operation mode is released, the basic operation mode starts. Attitude data measured by the IMU sensor 120 is fed into the real-time control domain, the user enters control signals through the foot pedal 110, and the controller 200 drives the holder robot 100 in the real-time control domain based on the foot pedal 110 input (S5).
When voice data is input through the microphone 6 (S6), the voice command processing device 140 recognizes a specific voice control command in the voice data, and the controller 200 drives the holder robot 100 based on that command (S8). If a foot pedal 110 control input arrives while in this voice command mode (S7), the foot-pedal input takes precedence over the voice command.
That is, the basic operation mode is the mode that operates by default when the camera holder robot 100 is powered on; it accepts both the foot pedal 110 device connected directly to the real-time control domain and voice commands connected to the non-real-time control domain. When both external input devices are connected, the foot-pedal command on the real-time side has priority.
As mentioned above, the voice command processing device 140 receives voice data from the user via the microphone 6, recognizes voice control commands in it, and exchanges data with the controller through communication in the non-real-time control domain.
The voice control input is a position command relative to the displayed image, and the controller 200 drives the holder robot 100 so that the position of the image changes accordingly.
The voice command processing device 140 also includes a voice command DB in which individual speaker characteristics are learned and which is classified by voice control command; it recognizes voice control commands in the voice data and exchanges data with the controller 200 in the non-real-time control domain.
In voice command mode, as shown in Fig. 12, the voice control commands include a grid voice command: the image data is divided into multiple regions, an index is displayed on each region, and when the user speaks a particular index, the controller 200 drives the holder robot 100 so that the selected region fills the entire screen.
That is, in voice command mode the user issues motion commands through the wirelessly connected microphone 6; a learning algorithm that adapts to each individual speaker is provided, and the trained result can easily be uploaded to the voice command processing device 140.
For example, a voice command is given relative to the displayed image and can move it up/down, right/left, closer/farther, or rotate it right/left. A voice command vocabulary for fine adjustment is defined for each degree of freedom of laparoscope 1 motion (up, down, left, right, closer, farther, etc.), together with a scale-calibration algorithm optimized for laparoscopic surgery. The screen motion, that is, the robot motion speed, can also be tuned to the user's requirements.
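The grid voice command described above amounts to mapping a spoken index to a grid-cell centre and then panning (and zooming) the camera toward it. The sketch below is a minimal, hypothetical illustration of that geometry; the function name, 1-based row-major indexing, and pixel conventions are assumptions, not taken from the patent.

```python
def grid_target(index, grid_rows, grid_cols, width, height):
    """Return the pixel centre of grid cell `index` (1-based, row-major)
    and the pan offset that would bring that cell to the image centre."""
    if not 1 <= index <= grid_rows * grid_cols:
        raise ValueError("index outside grid")
    r, c = divmod(index - 1, grid_cols)
    cell_w, cell_h = width / grid_cols, height / grid_rows
    cx = (c + 0.5) * cell_w   # cell centre, x
    cy = (r + 0.5) * cell_h   # cell centre, y
    # Offset from the current image centre; the holder robot pans by this
    # amount (and zooms in) so the selected cell fills the screen.
    return (cx, cy), (cx - width / 2, cy - height / 2)
```

For a 3x3 grid on a 900x900 image, speaking "5" (the centre cell) yields a zero pan offset, while "4" requests a leftward pan.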
When the user selects the external operation mode (S9), the external control device 210 is connected for communication to the external interface 201 of the controller 200, and the controller 200 drives the laparoscope robot 100 based on the user's external control input (S11).
This external operation mode runs in the non-real-time control domain and exchanges data with the real-time control domain of the controller 200 through internal communication. The external control input is a position command relative to the displayed image, and the controller 200 drives the holder robot 100 so that the image position changes accordingly.
For example, when the user selects "external operation mode" on the external control device 210, the control inputs (position, velocity) of the external control device 210 are received and the robot 100 is moved through the camera holder robot controller 200. The external control device 210 must deliver its data to the camera holder robot controller 200 according to a defined protocol, and the related APIs are provided.
When the user selects the automatic operation mode (S11), the image processing device 130, which holds a surgical tool learning DB built by learning sample surgical tool images and classified by tool type, recognizes the surgical tools in the image data, determines their positions and types, and exchanges data with the controller 200 in the non-real-time control domain; the controller 200 then drives the holder robot 100 so that the recognized surgical tool stays within a specific region of the image data (S14).
This automatic operation mode runs only after the user has selected it, image data has been input (S12), and a surgical tool has been recognized by the image processing device 130 (S13).
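The "keep the tool within a specific region" behaviour of the automatic mode can be sketched as a dead-band controller: no motion while the detected tool tip stays inside a central box, and a corrective pan once it leaves. This is an assumed, simplified illustration; the patent does not specify the region shape, the margin, or any function names used here.

```python
def centering_correction(tool_x, tool_y, width, height, margin=0.25):
    """If the detected tool tip leaves the central region of the frame
    (inner box with a `margin` border on each side), return the pan
    needed to bring it back; (0.0, 0.0) means no robot motion is needed."""
    x_lo, x_hi = width * margin, width * (1 - margin)
    y_lo, y_hi = height * margin, height * (1 - margin)
    dx = dy = 0.0
    if tool_x < x_lo:
        dx = tool_x - x_lo        # pan left by the overshoot
    elif tool_x > x_hi:
        dx = tool_x - x_hi        # pan right by the overshoot
    if tool_y < y_lo:
        dy = tool_y - y_lo
    elif tool_y > y_hi:
        dy = tool_y - y_hi
    return dx, dy
```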
In the laparoscopic camera holder robot control system according to an embodiment of the present invention, excluding manual mode, the drive priority of the holder robot 100 is, in order: foot pedal 110 input, voice control command, external control input, and automatic operation mode.
The image-information-based laparoscopic robot artificial intelligence surgery guide system and guide method are described below. This artificial intelligence surgery guide basically operates in the automatic operation mode of the control method described above.
Fig. 13 is a block diagram of an image-information-based laparoscopic robot artificial intelligence surgery guide system according to an embodiment of the present invention. Fig. 14 is a block diagram of the data collection unit, Fig. 15 of the learning DB, and Fig. 16 of the guide monitoring unit, each according to an embodiment of the present invention.
The image-information-based laparoscopic robot artificial intelligence surgery guide system according to an embodiment of the present invention monitors the surgical procedure based on image data captured by the laparoscopic camera 1 of the laparoscopic camera holder robot 100 and thereby guides the surgery.
As shown in Fig. 13, the system further comprises, on top of the control system described above, a data collection unit, a data learning unit, a learning DB, a guide monitoring unit, notification means, and the like.
As shown in Fig. 14, the data collection unit collects surgical image data 311, sample surgical tool images 312, sample removal-target tool images 313, and voice data 314.
The data learning unit learns from the collected surgical image data 311, sample surgical tool images 312, sample removal-target tool images 313, and voice data 314; the learned data are stored, respectively, in the surgical image learning DB 331, the surgical tool learning DB 332, the removal-target tool learning DB 333, and the voice command DB 334 of the learning DB 330.
The image processing device 130 receives the current surgical image data captured by the camera and exchanges data, through communication in the non-real-time control domain, with the controller 200 that drives the holder robot 100.
The data learning unit learns from the collected surgical image data 311, classifies it by surgery type and by operator, and stores the surgical learning data in the surgical image learning DB. In the surgical learning data, the characteristics of the surgical sequence are learned per surgery type and per operator.
The data learning unit 320 also learns from the sample surgical tool images 312, classifies them by tool type, and stores them in the surgical tool learning DB 332. The image processing device 130 recognizes the surgical tools in the current surgical image data, determines their positions and types, and exchanges data with the controller 200 in the non-real-time control domain. The surgical learning data can also capture the characteristic positions and orientations of the surgical tools along the surgical sequence.
The data learning unit 320 further learns from the sample removal-target tool images 313, classifies them by removal-target tool type, and stores them in the removal-target tool learning DB 333. The image processing device 130 recognizes the removal-target tools in the current surgical image data, determines their positions and types, and exchanges data with the controller 200 in the non-real-time control domain.
The data learning unit 320 also learns individual speaker characteristics from the voice data, classifies them by voice control command, and stores them in the voice command DB 334. The voice command processing device 140 recognizes voice control commands in the voice data and exchanges data with the controller 200 in the non-real-time control domain.
The guide monitoring unit 350 basically compares the surgical learning data stored in the learning DB 330 with the current surgical image data captured by the camera and generates surgical guide data; the notification means 340 presents this surgical guide data.
Specifically, as shown in Fig. 16, the guide monitoring unit 350 comprises a search engine 351 that retrieves, based on the current surgical image data, the surgical learning data to be compared; a comparative analysis unit 352 that compares the current surgical image data with the surgical learning data in real time; and an event determination unit 353 that, based on that comparison, determines whether an event has occurred, namely a missing sequence step or a deviation of the current surgical image data from the surgical learning data beyond a threshold.
The notification means 340 transmits a notification signal when an event occurs.
When an event occurs, the controller 200 drives the holder robot 100 so as to capture an image of the location where the event occurred.
As noted above, the surgical learning data can capture the characteristic positions and orientations of the surgical tools along the surgical sequence. The comparative analysis unit 352 therefore compares the learned tool positions and orientations at each sequence step against the surgical tools in the current surgical image data, and the event determination unit 353 determines whether the tool position or orientation at a given sequence step has deviated beyond a threshold.
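The two event conditions the event determination unit checks (a missing sequence step, and a tool-pose deviation beyond a threshold) can be sketched as follows. This is a deliberately simplified stand-in for the learned comparison; the pose representation, the Euclidean deviation metric, and all names are assumptions for illustration.

```python
def judge_event(expected_steps, observed_steps,
                expected_pose, observed_pose, pose_threshold):
    """Flag events when a learned sequence step is missing or when the
    observed tool pose deviates from the learned pose beyond a threshold."""
    missing = [s for s in expected_steps if s not in observed_steps]
    # Euclidean deviation between learned and observed pose tuples
    # (e.g. x, y, orientation) at the current sequence step.
    deviation = sum((a - b) ** 2
                    for a, b in zip(expected_pose, observed_pose)) ** 0.5
    events = []
    if missing:
        events.append(("missing_step", missing))
    if deviation > pose_threshold:
        events.append(("pose_deviation", deviation))
    return events
```

An empty result means the procedure matches the learned sequence; any returned event would trigger the notification means and steer the camera toward the event location.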
As noted above, the system includes the removal-target tool learning DB 333, built by learning the sample removal-target tool images 313 and classified by tool type; the image processing device 130 recognizes the removal-target tools in the current surgical image data, determines their positions and types, and exchanges data with the controller 200 in the non-real-time control domain.
When a removal-target tool has been recognized, the controller 200 drives the holder robot 100 so as to capture an image of that tool's position immediately before the surgery is completed.
The system further includes a removal determination unit 356 that, when a removal-target tool has been recognized, determines whether it has been removed; if it is judged immediately before the surgery is completed that the removal-target tool has not been removed, the controller 200 drives the holder robot 100 so as to image that tool.
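The end-of-surgery safety check above reduces to a set difference: every removal-target tool that was detected during the procedure but never logged as removed must be imaged before completion. A minimal illustrative sketch (names and data shapes assumed, not from the patent):

```python
def tools_to_image(detected_tools, removed_tools):
    """Immediately before surgery is declared complete, return the
    removal-target tools that were detected but never logged as removed;
    the holder robot would be steered to image each of these."""
    removed = set(removed_tools)
    return [t for t in detected_tools if t not in removed]
```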
The voice command processing device 140 also receives voice data from the microphone 6, which captures the user's voice, recognizes voice control commands in it, and exchanges data with the controller 200 through communication in the non-real-time control domain.
The voice control input is a position command relative to the displayed image, and the controller 200 drives the holder robot so that the image position changes based on it. As noted above, the voice command DB stores individually learned speaker characteristics classified by voice control command; voice control commands are recognized in the voice data, and data are exchanged with the controller in the non-real-time control domain.
The robot posture storage unit 357 can be commanded, during surgery, to store the robot's posture at a given moment or over a specific time range. That is, when a store command is given by voice or via the foot pedal 110, the robot posture at that moment is stored, and the controller can then, at the user's request, drive the holder robot back to the stored posture.
In other words, the robot posture at a moment to be remembered is stored through a voice command or the foot pedal 110, and the drive is later controlled so that the remembered view can be shown again at the user's request. When a move-to-previous-position command is given, the current position is automatically stored first, and the robot then moves according to the return-to-previous-position command.
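The save-and-return behaviour above, including the automatic save of the current posture before any commanded move, can be sketched as a small store with a history stack. This is an illustrative model only; the class and method names are assumptions.

```python
class PoseStore:
    """Store named robot postures on request and support
    'return to previous position': the current posture is pushed
    automatically before every commanded move."""

    def __init__(self):
        self._saved = {}    # name -> stored posture
        self._history = []  # auto-saved postures, most recent last

    def save(self, name, pose):
        """Store the posture at this moment under a name (voice/pedal command)."""
        self._saved[name] = pose

    def move_to(self, name, current_pose):
        """Auto-save the current posture, then return the stored target."""
        self._history.append(current_pose)
        return self._saved[name]

    def previous(self):
        """Return-to-previous-position command."""
        return self._history.pop()
```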
The system can also detect situations in which the target point cannot be reached because of changes in the environment during surgery (movement of organs) or the current positions of the surgical tools, and can mark the obstructing objects on the screen.
That is, the obstacle recognition unit according to an embodiment of the present invention may be configured so that, when a tool present in the current surgical image data matches neither the surgical tool DB nor the removal-target tool DB, it is recognized as an obstacle and marked in the surgical image data.

Claims (11)

  1. An image information-based laparoscopic robot artificial intelligence surgery guide system that guides surgery by monitoring the surgical procedure based on image data captured by the laparoscopic camera of a laparoscopic camera holder robot, the system comprising:
    a data collection unit collecting surgical image data;
    a surgical image learning DB storing surgical learning data obtained by learning the collected surgical image data and classifying it by surgery type and by operator;
    an image processing device receiving the current surgical image data captured by the camera and exchanging data, through communication in a non-real-time control domain, with a controller that drives the holder robot; and
    a guide monitoring unit generating surgical guide data customized per surgery type and per operator based on the surgical learning data,
    wherein the controller controls the driving of the holder robot based on the surgical guide data.
  2. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 1, wherein:
    in the surgical learning data, surgical sequence characteristics are learned per surgery type and per operator; and
    the system further comprises notification means presenting the surgical guide data.
  3. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 2, wherein:
    the guide monitoring unit comprises:
    a comparative analysis unit comparing the current surgical image data with the surgical learning data in real time; and
    an event determination unit determining, based on the comparative analysis, whether an event has occurred, namely a missing sequence step or a deviation of the current surgical image data from the surgical learning data beyond a threshold; and
    the notification means transmits a notification signal when the event occurs.
  4. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 3, wherein:
    when the event occurs, the controller drives the holder robot so as to capture an image of the location where the event occurred.
  5. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 4, further comprising:
    a surgical tool learning DB classified by tool type and built by learning sample surgical tool images, wherein the image processing device recognizes the surgical tools in the current surgical image data, determines their positions and types, and exchanges data with the controller in the non-real-time control domain;
    wherein, in the surgical learning data, the characteristic positions and orientations of the surgical tools along the surgical sequence are learned; and
    wherein the comparative analysis unit compares the learned tool positions and orientations at each surgical sequence step against the surgical tools in the current surgical image data, and the event determination unit determines whether the tool position or orientation at a given sequence step has deviated beyond a threshold.
  6. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 5, further comprising:
    a removal-target tool learning DB classified by tool type and built by learning sample removal-target tool images, wherein the image processing device recognizes the removal-target tools in the current surgical image data, determines their positions and types, and exchanges data with the controller in the non-real-time control domain; and
    wherein, when a removal-target tool has been recognized, the controller drives the holder robot so as to capture an image of that tool's position immediately before the surgery is completed.
  7. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 6,
    further comprising a removal determination unit determining, when a removal-target tool has been recognized, whether it has been removed, wherein the controller drives the holder robot so as to image the removal-target tool when it is judged, immediately before the surgery is completed, that the tool has not been removed.
  8. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 7, further comprising:
    a microphone receiving voice data from the user, and a voice command processing device receiving the voice data, recognizing voice control commands, and exchanging data with the controller through communication in the non-real-time control domain,
    wherein the voice control input is a position command relative to the displayed image, and the controller drives the holder robot so that the position of the image changes based on the voice control input.
  9. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 8, wherein
    the voice command processing device includes a voice command DB in which individual speaker characteristics are learned and which is classified by voice control command, and recognizes voice control commands in the voice data to exchange data with the controller in the non-real-time control domain.
  10. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 7, further comprising
    a robot posture storage unit commanding, during surgery, storage of the robot's posture at a given moment or over a specific time range, wherein the controller drives the holder robot, at the user's request, so as to return to the stored robot posture.
  11. The image information-based laparoscopic robot artificial intelligence surgery guide system according to claim 10, further comprising
    an obstacle recognition unit recognizing, as an obstacle, a tool present in the current surgical image data that matches neither the surgical tool DB nor the removal-target tool DB, and marking it in the surgical image data.
PCT/KR2021/011021 2021-08-19 2021-08-19 Image information-based laparoscope robot artificial intelligence surgery guide system WO2023022258A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2021-0109257 2021-08-19
KR1020210109257A KR102627401B1 (en) 2021-08-19 2021-08-19 Laparoscopic Robot Artificial Intelligence Surgery Guide System based on image information

Publications (1)

Publication Number Publication Date
WO2023022258A1 true WO2023022258A1 (en) 2023-02-23

Family

ID=85239897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/011021 WO2023022258A1 (en) 2021-08-19 2021-08-19 Image information-based laparoscope robot artificial intelligence surgery guide system

Country Status (2)

Country Link
KR (1) KR102627401B1 (en)
WO (1) WO2023022258A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101114234B1 (en) * 2011-05-18 2012-03-05 주식회사 이턴 Surgical robot system and laparoscope handling method thereof
WO2017175232A1 (en) * 2016-04-07 2017-10-12 M.S.T. Medical Surgery Technologies Ltd. Vocally activated surgical control system
KR20180100831A (en) * 2017-03-02 2018-09-12 한국전자통신연구원 Method for controlling view point of surgical robot camera and apparatus using the same
US20190008598A1 (en) * 2015-12-07 2019-01-10 M.S.T. Medical Surgery Technologies Ltd. Fully autonomic artificial intelligence robotic system
KR20190133424A (en) * 2018-05-23 2019-12-03 (주)휴톰 Program and method for providing feedback about result of surgery
WO2020159978A1 (en) * 2019-01-31 2020-08-06 Intuitive Surgical Operations, Inc. Camera control systems and methods for a computer-assisted surgical system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7794396B2 (en) 2006-11-03 2010-09-14 Stryker Corporation System and method for the automated zooming of a surgical camera
KR101997566B1 (en) 2012-08-07 2019-07-08 삼성전자주식회사 Surgical robot system and control method thereof
US9827054B2 (en) * 2014-03-14 2017-11-28 Synaptive Medical (Barbados) Inc. Intelligent positioning system and methods therefore
KR101926123B1 (en) * 2017-12-28 2018-12-06 (주)휴톰 Device and method for segmenting surgical image
KR101864411B1 (en) * 2017-12-28 2018-06-04 (주)휴톰 Program and method for displaying surgical assist image
KR102146672B1 (en) * 2018-05-23 2020-08-21 (주)휴톰 Program and method for providing feedback about result of surgery
KR102008891B1 (en) * 2018-05-29 2019-10-23 (주)휴톰 Apparatus, program and method for displaying surgical assist image
US10383694B1 (en) * 2018-09-12 2019-08-20 Johnson & Johnson Innovation—Jjdc, Inc. Machine-learning-based visual-haptic feedback system for robotic surgical platforms
GB2612245B (en) * 2018-10-03 2023-08-30 Cmr Surgical Ltd Automatic endoscope video augmentation
KR102195825B1 (en) 2018-12-12 2020-12-28 (주)헬스허브 System for guiding surgical operation through alarm function and method therefor
KR102572006B1 (en) * 2019-02-21 2023-08-31 시어터 인코포레이티드 Systems and methods for analysis of surgical video
KR102239186B1 (en) 2019-07-26 2021-04-12 한국생산기술연구원 System and method for automatic control of robot manipulator based on artificial intelligence
US20210059758A1 (en) * 2019-08-30 2021-03-04 Avent, Inc. System and Method for Identification, Labeling, and Tracking of a Medical Instrument

Also Published As

Publication number Publication date
KR20230028818A (en) 2023-03-03
KR102627401B1 (en) 2024-01-23

Similar Documents

Publication Publication Date Title
US9630323B2 (en) Operation support system and control method of operation support system
JP3506809B2 (en) Body cavity observation device
KR102105142B1 (en) Switching control of an instrument to an input device upon the instrument entering a display area viewable by an operator of the input device
JP4179846B2 (en) Endoscopic surgery system
CN108348134B (en) Endoscope system
WO2010021447A1 (en) Three-dimensional display system for surgical robot and method for controlling same
KR20140022907A (en) Estimation of a position and orientation of a frame used in controlling movement of a tool
WO2011149260A2 (en) Rcm structure for a surgical robot arm
WO2023197941A1 (en) Surgical implantation imaging method and imaging system
WO2023022258A1 (en) Image information-based laparoscope robot artificial intelligence surgery guide system
WO2023022257A1 (en) System and method for controlling laparoscope camera holder robot
WO2013018985A1 (en) Surgical robot system
JP3744974B2 (en) Endoscopic surgical device
JPH09266882A (en) Endoscope device
JP4382894B2 (en) Field of view endoscope system
JP4554027B2 (en) Ophthalmic equipment
CN110549328A (en) Job support control device and method, job image control device and display method
KR102535861B1 (en) Laparoscopic camera holder Robot having adapter and remote center of motion structure
CN217938392U (en) Surgical implant imaging system
WO2022031069A1 (en) Laparoscopic surgical endoscope module having coating layer applied thereon
WO2019045531A2 (en) Medical arm assembly
WO2011145803A2 (en) Medical device for surgery
CN218606817U (en) Automatic distance adjusting mirror supporting mechanical arm device
CN217286107U (en) Remote operation microscope equipment
WO2022230814A1 (en) Robot system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21954305

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE