WO2022073290A1 - Surgical robot and its graphical control device and graphical display method - Google Patents

Surgical robot and its graphical control device and graphical display method

Info

Publication number
WO2022073290A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
projection
virtual camera
feature point
arm
Prior art date
Application number
PCT/CN2020/133490
Other languages
English (en)
French (fr)
Inventor
高元倩
王建辰
Original Assignee
深圳市精锋医疗科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市精锋医疗科技有限公司
Priority to EP20956605.8A (published as EP4218652A1)
Priority to US18/030,919 (published as US20240065781A1)
Publication of WO2022073290A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/37 Master-slave robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/361 Image-producing devices, e.g. surgical cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/25 User interfaces for surgical systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/64 Analysis of geometric attributes of convexity or concavity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2059 Mechanical position encoders
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 2034/302 Surgical robots specifically adapted for manipulations within body cavities, e.g. within abdominal or thoracic cavities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/41 Medical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/034 Recognition of patterns in medical or anatomical images of medical instruments
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present application relates to the field of medical devices, and in particular, to a surgical robot and its graphical control device and graphical display method.
  • Minimally invasive surgery refers to a surgical method in which modern medical instruments such as laparoscopes and thoracoscopes and related equipment are used to perform surgery inside the human body cavity. Compared with traditional surgical methods, minimally invasive surgery has the advantages of less trauma, less pain and faster recovery.
  • the surgical robot includes a master operation table and a slave operation device, and the slave operation device includes a plurality of operation arms, and the operation arms include a camera arm with an image end instrument and a surgical arm with an operation end instrument.
  • the main console includes a display and a handle. The doctor operates the handle to control the movement of the camera arm or the surgical arm under the field of view provided by the camera arm displayed on the monitor.
  • the field of view provided by the camera arm 34A' can often only cover a local area of the operating arm 34B'; the area that can be observed is the visible area, and the area outside it is the invisible area.
  • the doctor cannot observe the state of the camera arm 34A' itself within the visible area, and cannot observe collisions or potential collisions between the surgical arms 34B', or between a surgical arm 34B' and the camera arm 34A', in the invisible area. This situation is likely to cause surgical safety problems.
  • the present application provides a surgical robot, comprising: an input part; a display; an operating arm, including a plurality of joints and sensors for sensing joint variables of the joints, the operating arm having a feature point sequence formed by a plurality of orderly arranged feature points associated with the joints; and a controller, coupled to the input part, the display and the sensors, and configured to: obtain the feature point sequence of the operating arm and its corresponding kinematic model; acquire the joint variables sensed by the sensors, and acquire the virtual camera selected by the input part; determine, according to the kinematic model and the joint variables, the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera; orderly fit and connect the projection points to generate a projection image of the operating arm; and display the projection image on the display.
  • the controller is configured to: obtain, according to the kinematic model and the joint variables, the first position of each feature point in the feature point sequence in the reference coordinate system; convert each of the first positions into a second position in the virtual camera coordinate system; obtain the virtual focal length of the virtual camera and determine the projection plane of the virtual camera according to the virtual focal length; and obtain the projection point of each of the second positions on the projection plane according to the virtual focal length.
  • the controller is configured to: obtain, according to the kinematic model and the joint variables, the first position of each feature point in the feature point sequence in the reference coordinate system; convert each of the first positions into a second position in the virtual camera coordinate system; obtain the contour information of the joint corresponding to each of the feature points; and obtain the projection point of each of the second positions on the projection plane by combining the virtual focal length and the contour information.
  • in the step of orderly fitting and connecting the projection points to generate the projection image of the operating arm, the controller is configured to: fit and connect the projection points in an orderly manner in combination with the contour information to generate the projection image of the operating arm.
  • in the step of orderly fitting and connecting the projection points to generate the projection image of the operating arm, the controller is configured to: connect the projection points in an orderly manner according to the order, in the feature point sequence, of the feature points corresponding to the projection points, so as to generate the projection image of the operating arm.
  • the controller is configured to: acquire the icon of the end instrument of the operating arm; determine the pose of the end instrument relative to the projection plane of the virtual camera according to the joint variables and the kinematic model; rotate and/or scale the icon according to that pose; and splice the processed icon onto the distal projection point to generate the projection image.
  • the controller in the step of acquiring the icon of the terminal device of the operating arm, is configured to: acquire the type of the operating arm, and match the icon of the terminal device of the operating arm according to the type.
  • the virtual camera has a selectable virtual focal length and/or virtual aperture; in the step of determining, according to the kinematic model and the joint variables, the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera, the controller is configured to: obtain the virtual focal length and/or virtual aperture of the virtual camera selected by the input unit, and determine the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera by combining the virtual focal length and/or virtual aperture, the kinematic model and the joint variables.
  • before the step of displaying the projected image on the display, the controller is configured to: detect whether the projected image is distorted; when it is detected that the projected image is distorted, increase the virtual focal length and re-enter the step of determining the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera by combining the virtual focal length and/or the virtual aperture, the kinematic model and the joint variables; and when it is detected that the projected image is not distorted, enter the step of displaying the projected image on the display.
  • in the step of detecting whether the projected image is distorted, the controller is configured to: obtain the position of each of the projection points in the reference coordinate system; obtain the first projection points that fall into the edge area of the projection plane or of the display used for displaying the projection image; calculate the ratio of the number of the first projection points to the total number of the projection points; and when the ratio reaches a threshold, determine that the projected image is distorted.
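  • A hedged sketch of this edge-ratio test is given below; the margin fraction and the ratio threshold are illustrative values chosen for the example, not values taken from the application.

```python
# Illustrative sketch of the distortion test: count the projection points that fall into
# the edge area of the projection plane (or of the display window showing the projection
# image) and flag distortion when their share of all projection points reaches a threshold.
def is_projection_distorted(projection_points, width, height,
                            edge_margin=0.05, ratio_threshold=0.3):
    m_x, m_y = width * edge_margin, height * edge_margin
    first_points = [
        (u, v) for (u, v) in projection_points
        if u < m_x or u > width - m_x or v < m_y or v > height - m_y
    ]
    return len(first_points) / max(len(projection_points), 1) >= ratio_threshold
```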
  • the operating arm includes a camera arm with an image end instrument; the controller is further configured to: acquire the camera parameters of the image end instrument of the camera arm, the camera parameters including focal length and aperture, and calculate the visible area of the image end instrument according to the camera parameters; determine the pose of the image end instrument in the reference coordinate system according to the joint variables of the camera arm and the kinematic model; convert the visible area of the image end instrument into the visible area of the virtual camera according to the transformation relationship between the pose of the image end instrument and the pose of the virtual camera in the reference coordinate system; and calculate the boundary line of the visible area of the virtual camera on the projection plane and display the boundary line in the projected image shown on the display.
  • the operating arm includes a camera arm with an image end instrument and a surgical arm with an operation end instrument; in the step of orderly fitting and connecting each of the projection points to generate the projection image of the operating arm, the controller is further configured to: acquire the operation image of the operation area captured by the image end instrument; identify the feature parts of the operating arm from the operation image; match the associated first feature points from the feature point sequence according to the identified feature parts; and orderly fit and connect the projection points while marking the first projection points associated with the first feature points and the line segments connected to them, so as to generate the projection image of the operating arm.
  • the feature point sequence further includes unmatched second feature points; after the step of matching the associated first feature points from the feature point sequence according to the identified feature parts, the controller is configured to: obtain the unmatched second feature points; generate an image model of the corresponding feature part by combining the contour information, joint variables and kinematic model of the feature part corresponding to the second feature point; convert the image model into a supplementary image in the coordinate system of the image end instrument; splice the supplementary image to the image of the feature part corresponding to the first feature point according to the order relationship between the second feature point and the first feature point in the feature point sequence, so as to form a complete sub-image of the operating arm in the operation image; and display the operation image with the complete sub-image of the operating arm.
  • the controller is further configured to: obtain the maximum motion range of the operating arm in a first direction; calculate the motion amount of the operating arm in the first direction according to the joint variables of the operating arm and the kinematic model; generate an icon according to the maximum motion range in the first direction and the motion amount; and display the icon on the display.
  • the first direction is the forward and backward feeding direction.
  • the icon is a progress bar or a pie chart.
  • the controller is configured to correspondingly darken or lighten the color of the variable length bar when the movement amount increases or decreases.
  • the controller is configured to detect a currently controlled first operating arm from the operating arms, and to identify the first operating arm in the projected image.
  • the plurality of virtual cameras that can be selected by the input unit have different poses in the reference coordinate system.
  • the pose of the virtual camera in the reference coordinate system is determined based on the reachable workspace of the manipulator arm in the reference coordinate system.
  • the pose of the virtual camera under the reference coordinate system is determined based on the union space of the reachable workspace of the manipulator arm under the reference coordinate system.
  • the position of the virtual camera under the reference coordinate system is always located outside the union space, and the posture of the virtual camera under the reference coordinate system is always toward the union space.
  • the virtual camera has a selectable virtual focal length, the position of the virtual camera is outside a first area, and the first area is the area determined by the shortest virtual focal length that can just see the union space.
  • the virtual camera has a selectable virtual focal length, the position of the virtual camera is located within a second area, and the second area is the area determined by the longest virtual focal length that can just see the union space.
  • the posture of the virtual camera always faces the center of the union space.
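  • As a minimal sketch of this placement rule (assuming the union space is summarized by its center and a bounding radius, a simplification not stated in the application), the virtual camera can be positioned outside the union space along a chosen viewing direction and oriented so that its optical axis always points at the center of the union space:

```python
import numpy as np

def virtual_camera_pose(union_center, bounding_radius, view_dir, standoff=1.5):
    """Place a virtual camera outside the union space, looking at its center.

    Returns (position, R) where R's columns are the camera's right/up/forward axes
    expressed in the reference coordinate system (forward = optical axis).
    """
    d = np.asarray(view_dir, dtype=float)
    d /= np.linalg.norm(d)
    position = np.asarray(union_center, dtype=float) + d * bounding_radius * standoff
    forward = -d                                   # optical axis toward the union-space center
    up_hint = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(forward, up_hint)) > 0.99:       # avoid a degenerate cross product
        up_hint = np.array([0.0, 1.0, 0.0])
    right = np.cross(up_hint, forward)
    right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    return position, np.column_stack([right, up, forward])
```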
  • the controller is configured to display the projected image in a first display window of the display, and to generate a plurality of selectable icons of the virtual camera in the first display window.
  • the relative position of the icon and the projected image is fixed, and changes as the viewpoint of the projected image changes.
  • the icons are set to six, corresponding to virtual imaging of the operating arm from the left side, the right side, the upper side, the lower side, the front side and the rear side respectively to generate the projection image under the corresponding viewpoint.
  • the icons are displayed as arrow patterns or camera patterns, and any one of the icons rotated and selected corresponds to one of the virtual cameras.
  • the icon is displayed as a rotatable sphere, and any position reached by the icon after being rotated corresponds to one of the virtual cameras.
  • in the step of acquiring the virtual camera selected by the input unit, the controller is configured to: acquire the virtual camera selected by the input unit and at least two target positions of the virtual camera input by the input unit; determine, according to the preset motion speed of the virtual camera and according to the kinematic model and the joint variables, the target projection points, on the projection plane, of each feature point in the feature point sequence at each target position of the virtual camera; orderly fit and connect the target projection points at each target position to generate a target projection image of the operating arm; generate an animation from the target projection images; and play the animation on the display.
  • in the step of acquiring the virtual camera selected by the input unit, the controller is configured to: acquire the motion trajectory of the virtual camera input by the input unit; discretize the motion trajectory to obtain discrete positions as target positions; determine, according to the preset motion speed of the virtual camera and according to the kinematic model and the joint variables, the target projection points, on the projection plane, of each feature point in the feature point sequence at each target position of the virtual camera; orderly fit and connect the target projection points at each target position to generate the target projection image of the operating arm; generate an animation from the target projection images; and play the animation on the display at a preset frequency.
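  • The following sketch illustrates this playback idea under stated assumptions: the input motion trajectory of the virtual camera is discretized into target poses, a target projection image is rendered for each pose, and the resulting frames are played at a preset frequency; `render_projection` is a hypothetical stand-in for the projection pipeline described above.

```python
def make_projection_animation(camera_trajectory, step, render_projection, frame_rate_hz=25):
    """Discretize the virtual-camera trajectory and render one target projection per pose.

    camera_trajectory : ordered list of virtual-camera poses input by the user
    step              : discretization stride (derived from the preset motion speed)
    render_projection : callable(pose) -> target projection image of the operating arm
    Returns (frames, seconds_per_frame) for playback on the display.
    """
    target_positions = camera_trajectory[::step]
    frames = [render_projection(pose) for pose in target_positions]
    return frames, 1.0 / frame_rate_hz
```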
  • the operating arm includes a camera arm with an image end instrument; the controller is configured to: acquire an operation image of the operation area captured by the image end instrument; display the operation image on the display; and display the projection image floating over the operation image.
  • in the step of displaying the projected image floating over the operation image, the controller is configured to: acquire the overlapping area of the operation image and the projected image; obtain the first image attribute of the portion of the operation image in the overlapping area; and adjust the second image attribute of the portion of the projected image in the overlapping area according to the first image attribute.
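  • One possible way of realising this attribute adjustment (alpha blending is assumed here; the application only requires adjusting the second image attribute according to the first) is to reduce the opacity of the floating projection image wherever it overlaps the operation image:

```python
import numpy as np

def blend_overlap(operation_img, projection_img, overlap_mask, alpha=0.4):
    """operation_img, projection_img: HxWx3 float arrays; overlap_mask: HxW boolean.

    Inside the overlapping area the projection image is drawn semi-transparently so the
    underlying surgical scene (the first image attribute) remains visible.
    """
    out = operation_img.astype(float).copy()
    out[overlap_mask] = ((1.0 - alpha) * operation_img[overlap_mask]
                         + alpha * projection_img[overlap_mask])
    return out
```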
  • the controller is configured to: when a first one of the operating arms reaches a threshold of an event, identify at least a portion of the first operating arm in the projected image and display it on the display.
  • the threshold is a warning threshold and the event is a situation to be avoided.
  • the warning threshold is based on a range of motion of at least one joint in the first operating arm, and the condition to be avoided is a limitation of the range of motion of at least one joint in the first operating arm.
  • the warning threshold is based on the distance between the first operating arm and a second one of the operating arms, and the condition to be avoided is a collision between the first operating arm and the second operating arm.
  • the controller is configured to: obtain the minimum distance between the first operating arm and the second operating arm and determine the relationship between the minimum distance and the warning threshold; and when the minimum distance reaches the warning threshold but has not yet reached the threshold corresponding to the situation to be avoided, form a first marker identifying the minimum distance points on the sub-images of the first operating arm and the second operating arm.
  • the controller is configured to: when the minimum distance reaches the threshold corresponding to the situation to be avoided, form a second marker identifying the minimum distance points on the models of the first and second operating arms.
  • in the step of obtaining the minimum distance between the first operating arm and the second operating arm and determining the relationship between the minimum distance and the warning threshold, the controller is configured to: construct geometric models of the first operating arm and the second operating arm according to their kinematic models and structural features; discretize the geometric models of the first operating arm and the second operating arm to obtain sets of exterior information points of the two arms in the reference coordinate system; and determine the minimum distance between the first operating arm and the second operating arm according to the sets of exterior information points. Identifying the minimum distance points on the sub-images of the first operating arm and the second operating arm includes: determining the minimum distance points corresponding to the minimum distance, and identifying the minimum distance points on the models of the first operating arm and the second operating arm.
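  • A minimal sketch of the discretization step follows, assuming each arm's geometric model has already been sampled into a set of exterior information points in the reference coordinate system; a brute-force search is adequate for a few hundred sample points.

```python
import numpy as np

def minimum_distance(points_a, points_b):
    """points_a: (N, 3) and points_b: (M, 3) exterior-information point sets.

    Returns (d_min, closest_point_on_a, closest_point_on_b); the two closest points are
    the minimum-distance points to be marked on the arms' sub-images or models.
    """
    diffs = points_a[:, None, :] - points_b[None, :, :]   # (N, M, 3) pairwise differences
    dists = np.linalg.norm(diffs, axis=2)                 # (N, M) pairwise distances
    i, j = np.unravel_index(np.argmin(dists), dists.shape)
    return dists[i, j], points_a[i], points_b[j]
```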
  • the controller is configured to: when the minimum distance reaches the warning threshold, determine a collision direction according to the positions, in the reference coordinate system, of the minimum distance points on the sub-images of the first operating arm and the second operating arm; and identify the collision direction between the first operating arm and the second operating arm.
  • the surgical robot includes a mechanical handle coupled to the controller and used to control the movement of the operating arm; the controller is configured to: generate, according to the collision direction, a resistance that hinders the mechanical handle from moving in the associated direction.
  • the mechanical handle has a plurality of joint assemblies and drive motors for driving the joint assemblies to move, each drive motor being coupled to the controller; the controller is configured to: cause the drive motor associated with the resistance direction to produce a reverse torque.
  • the controller is configured such that, when the minimum distance is between the warning threshold and the threshold corresponding to the situation to be avoided, the magnitude of the reverse torque is negatively correlated with the magnitude of the minimum distance.
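  • A hedged sketch of this force-feedback rule is shown below; a simple linear law is assumed, whereas the application only requires that the reverse torque be negatively correlated with the minimum distance between the warning threshold and the avoidance threshold.

```python
def reverse_torque(d_min, d_warning, d_avoid, tau_max):
    """Reverse torque commanded to the drive motors of the mechanical handle.

    d_warning > d_avoid: no resistance above the warning threshold, full resistance at the
    threshold of the situation to be avoided, and a linearly increasing torque in between.
    """
    if d_min >= d_warning:
        return 0.0
    if d_min <= d_avoid:
        return tau_max
    return tau_max * (d_warning - d_min) / (d_warning - d_avoid)
```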
  • the present application provides a graphical display method for a surgical robot
  • the surgical robot includes: an input part; a display; and an operating arm, including a plurality of joints and sensors for sensing joint variables of the joints, a plurality of the joints constituting positioning degrees of freedom and/or orientation degrees of freedom, the operating arm having a feature point sequence composed of ordered feature points, the feature points representing the joints;
  • the graphical display method includes the steps of: obtaining the feature point sequence of the operating arm and its corresponding kinematic model; obtaining the joint variables sensed by the sensors, and obtaining the virtual camera selected by the input part; determining, according to the kinematic model and the joint variables, the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera; orderly fitting and connecting the projection points to generate a projection image of the operating arm; and displaying the projection image on the display.
  • the present application provides a computer-readable storage medium storing a computer program, the computer program being configured to be loaded and executed by a processor to implement the graphical display method according to any one of the foregoing embodiments.
  • the present application provides a graphical control device for a surgical robot, comprising: a memory for storing a computer program; and a processor for loading and executing the computer program; wherein the computer program is configured to be loaded and executed by the processor to implement the steps of the graphical display method according to any one of the above embodiments.
  • by configuring the virtual camera to simulate a real camera imaging the operating arms, all of the operating arms, and each operating arm as a whole, can be observed, which helps the doctor observe the motion state of the operating arms in an all-round way and in turn contributes to the reliability and continuity of the operation.
  • Fig. 1 is a partial schematic diagram of a prior art surgical robot under an operation state
  • FIG. 2 is a schematic structural diagram of an embodiment of a surgical robot of the present application.
  • FIG. 3 is a partial schematic view of an embodiment of the surgical robot shown in FIG. 1;
  • FIG. 4 is a flowchart of an embodiment of a method for graphically displaying a surgical robot
  • FIG. 5 is a schematic structural diagram of an operating arm and a power part in a surgical robot
  • FIG. 6 is a schematic diagram of a virtual camera layout of an embodiment of the surgical robot shown in FIG. 1;
  • FIG. 7 is a schematic diagram of a configuration interface of a virtual camera of an embodiment of the surgical robot shown in FIG. 1;
  • FIG. 8 is a schematic diagram of projection imaging according to an embodiment of the graphical display method shown in FIG. 4;
  • FIGS. 9 to 10 are respectively schematic diagrams of display interfaces of an embodiment of a graphical display method;
  • FIG. 11 is a flowchart of an embodiment of a method for graphically displaying a surgical robot
  • FIGS. 12 to 13 are respectively schematic diagrams of display interfaces of an embodiment of a graphical display method;
  • FIG. 14 is a flowchart of an embodiment of a method for graphically displaying a surgical robot
  • FIG. 15 is a schematic diagram of a display interface of an embodiment of a graphical display method;
  • FIGS. 16 to 17 are flowcharts of an embodiment of a method for graphically displaying a surgical robot;
  • FIG. 18 is a schematic diagram of a display interface of an embodiment of a graphical display method
  • FIG. 19 is a flowchart of an embodiment of a method for graphically displaying a surgical robot
  • FIGS. 20 to 21 are respectively schematic diagrams of configuration interfaces of the virtual camera of an embodiment of the surgical robot shown in FIG. 1;
  • FIGS. 22 to 23 are flowcharts of an embodiment of a method for graphically displaying a surgical robot;
  • FIG. 24 is a schematic diagram of a configuration interface of a virtual camera of an embodiment of the surgical robot shown in FIG. 1;
  • FIGS. 25 to 27 are flowcharts of an embodiment of a method for graphically displaying a surgical robot;
  • FIG. 28 is a schematic diagram of observing the operating arm with a large field of view
  • Figure 29 is a schematic diagram of a display interface generated using a large field of view as shown in Figure 28;
  • Figure 30 is a schematic diagram of a display interface generated after adjusting the large field of view as shown in Figure 28;
  • FIG. 31 is a flowchart of an embodiment of a method for graphically displaying a surgical robot;
  • Fig. 32 is a partial schematic view of an embodiment of the surgical robot shown in Fig. 1;
  • FIGS. 33 to 34 are respectively schematic diagrams of display interfaces of an embodiment of a graphical display method;
  • FIGS. 35 to 38 are flowcharts of an embodiment of a method for graphically displaying a surgical robot;
  • FIG. 39 is a schematic structural diagram of another embodiment of the surgical robot of the present application;
  • FIG. 40 is a schematic structural diagram of a graphical control device for a surgical robot according to an embodiment of the application.
  • "distal end" and "proximal end" are used in this application as orientation terms, which are common terms in the field of interventional medical devices, wherein "distal end" means the end farther from the operator during the operation, and "proximal end" means the end closer to the operator during the operation.
  • "first"/"second" and the like may refer either to a single component or to a class of two or more components having common characteristics.
  • FIG. 2 and FIG. 3 are, respectively, a schematic structural diagram and a partial schematic diagram of an embodiment of the surgical robot of the present application.
  • the surgical robot includes a master console 2 and a slave operation device 3 controlled by the master console 2 .
  • the master console 2 has a motion input device 21 and a display 22; the doctor sends control commands to the slave operation device 3 by operating the motion input device 21, so that the slave operation device 3 performs the corresponding operations according to the control commands, and views the surgical field through the display 22.
  • the slave operating device 3 has an arm mechanism, and the arm mechanism has a mechanical arm 30 and an operating arm 31 detachably installed at the distal end of the mechanical arm 30 .
  • the robotic arm 30 includes a base and a connecting assembly that are connected in sequence, and the connecting assembly has a plurality of joint assemblies.
  • the operating arm 31 includes a connecting rod 32, a connecting assembly 33 and an end instrument 34 connected in sequence, wherein the connecting assembly 33 has a plurality of joint assemblies, and the pose of the end instrument 34 is adjusted by adjusting the joints of the operating arm 31; the end instrument 34 includes an image end instrument 34A and an operation end instrument 34B.
  • the image end instrument 34A is used to capture an image within the field of view, and the display 22 is used to display the image.
  • the manipulation end instrument 34B is used to perform surgical operations such as cutting, suturing.
  • the surgical robot shown in FIG. 1 is a single-hole surgical robot, and each operating arm 31 is inserted into the patient's body through the same trocar 4 installed at the distal end of the robotic arm 30 .
  • the doctor generally only controls the operating arm 31 to complete basic surgical operations.
  • the manipulating arm 31 of the single-hole surgical robot should have both a positional degree of freedom (that is, a positioning degree of freedom) and a posture degree of freedom (that is, the orientational degree of freedom), so as to realize the change of the posture and attitude within a certain range.
  • the manipulating arm 31 has a horizontal-movement degree of freedom x, a vertical-movement degree of freedom y, a rotation degree of freedom α, a pitch degree of freedom β and a yaw degree of freedom γ; the operating arm 31 can also obtain a forward-and-backward movement degree of freedom (that is, a feed degree of freedom) z driven by the distal joint of the mechanical arm 30, namely the power mechanism 301.
  • the power mechanism 301 has a guide rail and a power part slidably arranged on the guide rail, and the operating arm is detachably installed on the power part.
  • the power part provides power for the joints of the operating arm 31 to realize the remaining five degrees of freedom (i.e., [x, y, α, β, γ]).
  • the surgical robot also includes a controller.
  • the controller can be integrated into the master console 2 or into the slave operation device 3 .
  • the controller can also be independent of the master operating console 2 and the slave operating device 3, which can be deployed locally, for example, or the controller can be deployed in the cloud.
  • the controller may be constituted by more than one processor.
  • the surgical robot further includes an input unit.
  • the input unit may be integrated into the main console 2 .
  • the input unit may also be integrated in the slave operating device 3 .
  • the input part can also be independent of the master console 2 and the slave operation device 3 .
  • the input unit may be, for example, a mouse, a keyboard, a voice input device, or a touch screen.
  • a touch screen is used as the input part, the touch screen is disposed on the armrest of the main console 2, and the information available for configuration can be displayed on the touch screen, such as the virtual camera to be selected and its virtual camera parameters.
  • the information available for configuration may be displayed on the display 22 of the main console 2 or other external displays.
  • the manipulator arm 31 also includes sensors that sense joint variables of the joint. These sensors include an angle sensor that senses the rotational motion of the joint assembly and a displacement sensor that senses the linear motion of the joint assembly, and an appropriate sensor can be configured according to the type of the joint.
  • the controller is coupled to these sensors and to the input section and the display 22 of the main console 2 .
  • a graphical display method of a surgical robot is provided, and the graphical display method can be executed by a controller.
  • the graphical display method includes the following steps:
  • step S11 the feature point sequence of the operating arm and the kinematic model corresponding to the operating arm are obtained.
  • a storage unit 311 is installed on the abutment surface where the driving box 310 of the operating arm 31 abuts the power part 302 of the power mechanism 301; correspondingly, the abutment surface where the power part 302 abuts the driving box 310 is provided with a reading unit 303 matched with the storage unit 311.
  • the reading unit 303 is coupled to the controller.
  • when the reading unit 303 communicates with the coupled storage unit 311, the reading unit 303 reads the relevant information from the storage unit 311.
  • the storage unit 311 is, for example, a memory or an electronic tag.
  • the storage unit stores, for example, one or a combination of two or more of the type of the operating arm, the sequence of feature points, and the kinematic model constructed in advance according to the link parameters of the operating arm.
  • the feature point sequence includes a plurality of feature points, the feature points can represent any feature part in the manipulator arm, and the feature part can refer to one or more of the end devices, joints, and connecting rods of the manipulator arm.
  • the storage unit 311 stores the feature point sequence and kinematic model of the operating arm, and the required feature point sequence and kinematic model of the operating arm can be obtained directly from the storage unit 311 .
  • the storage unit 311 only stores the type of the manipulator, and other storage units coupled to the controller store the feature point sequences and kinematic models of different types of manipulators.
  • the feature point sequence and kinematic model of the corresponding manipulator can be obtained according to the acquired type of manipulator.
  • Step S12 acquiring joint variables of each joint in the operating arm sensed by the sensor.
  • the joint variable refers to the joint amount of the rotation joint and/or the joint offset of the mobile joint in the joint.
  • Step S13 acquiring the virtual camera selected by the input unit.
  • a virtual camera is a camera that does not physically exist and does not actually capture images of an object; it only embodies the concept of a viewpoint, as shown in FIG. 6, which shows a virtual camera relative to the manipulator arm.
  • the default virtual camera 100 can be defined as any one of them, for example, the default selection is the virtual camera 100 on the puncture device 4 .
  • the parameters of the virtual camera can be configured; the virtual camera parameters (that is, configuration parameters) include at least the (virtual) pose and, corresponding to real-camera parameters such as focal length and/or aperture, also include a virtual focal length and/or a virtual aperture.
  • the (virtual) focal length corresponds to the adjustable field of view of the (virtual) camera;
  • the (virtual) aperture corresponds to the adjustable depth of field of the (virtual) camera.
  • the virtual camera parameters can also be described as including the field of view and/or the depth of field.
  • the field of view and/or the depth of field are also virtual.
  • imaging principles like real cameras can also be utilized to achieve the subject matter of the present application. Different virtual camera parameters can show different imaging effects to doctors.
  • These virtual camera parameters can be solidified in a system configuration file stored in the memory of the surgical robot, and can be obtained by reading the system configuration file through the controller.
  • these virtual camera parameters can also be set manually by the doctor through an input unit coupled to the controller, before or during the operation, so that they can be configured on demand; for example, the virtual camera parameters can be set by entering the relevant data through a text control, or obtained by selecting from an option control.
  • the pose of the virtual camera may be the same as that of the real camera (ie, the image end instrument) to observe the manipulator from the same viewpoint as the real camera.
  • the pose of the virtual camera can also be different from the pose of the real camera, so that the manipulator can be observed from a different viewpoint than the real camera.
  • the pose of the virtual camera can be selected to be different from the pose of the real camera for observation, which helps to obtain more comprehensive information of the manipulator arm.
  • the manipulator arm can also be a camera arm at this time, so that the virtual camera can observe it.
  • the acquired virtual camera includes acquiring the pose of the virtual camera and virtual camera parameters of the virtual camera.
  • the longest virtual focal length can be infinite, and the shortest can be infinitely close to 0.
  • the virtual focal length that the virtual camera can select can be configured by imitating the lens of a real camera with a focal length range of 2 mm to 70 mm, for example, it can be configured as a virtual focal length of 2 mm to 50 mm, such as 2 mm, 5 mm, 10 mm, 20 mm.
  • the position of the virtual camera is configured according to the shortest virtual focal length and/or the longest virtual focal length. Among them, the smaller the virtual focal length, the larger the projected image, and the more local details can be viewed; the larger the virtual focal length, the smaller the projected image, and the better the global view.
  • the virtual aperture that the virtual camera can select can be configured by imitating the lens of a real camera with, e.g., an aperture range of F1, F1.2, F1.4, F2, F2.8, F4, F5.6, F8, F11, F16, F22, F32, F44, F64;
  • for example, the virtual aperture can be configured as F2.8, F4, F5.6, F8. The larger the virtual aperture, the smaller the depth of field; the smaller the virtual aperture, the larger the depth of field.
  • a configuration interface of a virtual camera is illustrated.
  • the pose, virtual focal length and virtual aperture of the virtual camera can be selected on this interface.
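  • The configuration interface can be thought of as exposing a small set of discrete choices; the sketch below mirrors the example values mentioned above (viewpoints, 2 mm to 50 mm virtual focal lengths, F2.8 to F8 virtual apertures), while the data layout itself is an assumption made for illustration.

```python
# Illustrative configuration options for the virtual camera; the values follow the
# examples given in the text, the dictionary structure is assumed.
VIRTUAL_CAMERA_OPTIONS = {
    "viewpoint": ["left", "right", "upper", "lower", "front", "rear"],
    "virtual_focal_length_mm": [2, 5, 10, 20, 50],     # smaller -> larger image, more local detail
    "virtual_aperture": ["F2.8", "F4", "F5.6", "F8"],  # larger aperture -> shallower depth of field
}

def select_virtual_camera(viewpoint="front", focal_length_mm=10, aperture="F5.6"):
    assert viewpoint in VIRTUAL_CAMERA_OPTIONS["viewpoint"]
    assert focal_length_mm in VIRTUAL_CAMERA_OPTIONS["virtual_focal_length_mm"]
    assert aperture in VIRTUAL_CAMERA_OPTIONS["virtual_aperture"]
    return {"viewpoint": viewpoint,
            "virtual_focal_length_mm": focal_length_mm,
            "virtual_aperture": aperture}
```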
  • Step S14 according to the kinematic model of the operating arm and the joint variables, determine the projection points of each feature point in the feature point sequence of the operating arm on the projection plane of the virtual camera.
  • the first position of each feature point in the feature point sequence in the reference coordinate system can be calculated according to the kinematic model of the manipulator arm and the joint variables; each first position is then converted into a second position in the virtual camera coordinate system according to the coordinate transformation relationship between the virtual camera coordinate system and the reference coordinate system; finally, each second position is projected onto the projection plane of the virtual camera to obtain a third position, which serves as the projection point.
  • the projection plane of the virtual camera is usually related to the virtual focal length of the virtual camera, so the projection plane can usually be determined according to the virtual focal length; this is equivalent to obtaining the projection point of each second position on the projection plane according to the virtual focal length.
  • the above-mentioned reference coordinate system can be set anywhere, and it is generally considered to be set on the surgical robot, and preferably, set on the slave operating device.
  • the reference coordinate system is the base coordinate system of the slave operating device.
  • the reference coordinate system is the tool coordinate system of the robotic arm from the operating device.
  • the process of obtaining the projection point of each second position on the projection plane according to the virtual focal length can be divided into two steps: in the first step, the contour information of the joint represented (associated) by each feature point is obtained, the contour information including, for example, size information and/or line-type information; in the second step, the projection point of each second position on the projection plane is obtained by combining the virtual focal length and the contour information.
  • the above-mentioned first position, second position and third position may be a point position or an area composed of multiple points.
  • a projection point can be understood as a point or a point set, which is determined by how the feature points are selected: when a feature point is selected as a single point of the feature part, the projection point is a point; when a feature point is selected as a point set of the feature part (that is, a "region"), the projection point correspondingly is a point set (that is, a region).
  • if the point set of a feature point can reflect the geometric size of the feature part, the projection point can also reflect that geometric size, so that the real structure of the manipulator can be approximately displayed, which is more conducive to showing the motion state of the manipulator.
  • the more numerous or denser the selected feature points, the more closely the displayed image approximates the real structure of the manipulator.
  • in this way, linear features of the manipulator arm, such as straight segments, curves and the curvature of curves, can be reflected more accurately.
  • the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera may be determined by combining the virtual focal length (virtual field of view) and/or virtual aperture (virtual depth of field) of the virtual camera, the kinematic model and the joint variables.
  • FIG. 8 illustrates a projection principle.
  • the operating arm has a sequence of feature points, which includes feature points Q1, Q2, Q3 and Q4.
  • correspondingly, a sequence of projection points q1, q2, q3 and q4 is obtained on the projection plane.
  • taking Q1 and Q2 as an example, their positions in space obtained according to the kinematic model and the joint variables are Q1(X1, Y1, Z1) and Q2(X2, Y2, Z2), respectively.
  • the projection points q1(x1, y1) and q2(x2, y2) of the feature point Q1 and the feature point Q2 on the projection plane are determined in combination with the virtual focal length and can be obtained by the following formula:
  • x1 = fx*(X1/Z1) + cx; y1 = fy*(Y1/Z1) + cy; x2 = fx*(X2/Z2) + cx; y2 = fy*(Y2/Z2) + cy;
  • fx is the focal length in the horizontal direction
  • fy is the focal length in the vertical direction
  • cx is the offset relative to the optical axis in the horizontal direction
  • cy is the offset relative to the optical axis in the vertical direction.
  • the values of fx and fy can be equal or unequal.
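  • Taken together, the projection of a feature point sequence can be sketched as follows; this is an illustrative implementation of the pinhole relations above rather than code from the application, and `T_cam_ref` is an assumed name for the homogeneous transform from the reference coordinate system to the virtual camera coordinate system.

```python
import numpy as np

def project_feature_points(first_positions_ref, T_cam_ref, fx, fy, cx, cy):
    """Project feature points of an operating arm onto the virtual camera's projection plane.

    first_positions_ref : (N, 3) first positions in the reference coordinate system,
                          obtained from the kinematic model and the sensed joint variables
    T_cam_ref           : (4, 4) transform from the reference frame to the virtual-camera frame
    fx, fy, cx, cy      : virtual focal lengths and principal-point offsets
    Returns an (N, 2) array of ordered projection points.
    """
    n = first_positions_ref.shape[0]
    homogeneous = np.hstack([first_positions_ref, np.ones((n, 1))])
    second_positions = (T_cam_ref @ homogeneous.T).T[:, :3]      # positions in the camera frame
    X, Y, Z = second_positions.T
    u = fx * (X / Z) + cx                                        # x_i = fx * (X_i / Z_i) + cx
    v = fy * (Y / Z) + cy                                        # y_i = fy * (Y_i / Z_i) + cy
    return np.stack([u, v], axis=1)
```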
  • Step S15 orderly fitting and connecting each projection point to generate a projection image of the operating arm.
  • the projection points can be connected in an orderly manner according to the order, in the feature point sequence, of the feature points corresponding to the projection points, so as to generate the projection image of the manipulator. "Orderly" does not dictate which projection point must be connected first: following the order corresponding to the actual structure of the manipulator, the projection points may be connected in sequence from the proximal end to the distal end, from the distal end to the proximal end, or from the middle toward both ends.
  • each projection point can be fitted and connected in an orderly manner in combination with the contour information of the feature part to generate a projection image of the manipulator arm.
  • adjacent projection points can be connected by line segments with the same size (width) as the projection points.
  • fitting connection refers to a connection method that approximates the linear features of the feature parts; for example, for an operating arm that is generally linear, adjacent projection points are connected with straight line segments, and for an operating arm that is at least partially curved, a curved segment is used to connect the projection points corresponding to the curved part.
  • the way of fitting the connection can reflect the linear characteristics of the manipulator.
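  • A minimal sketch of the orderly fitting and connection step, assuming the projection points arrive in the same order as the feature point sequence and that hypothetical per-feature contour widths are available:

```python
def fit_and_connect(projection_points, contour_widths=None):
    """Join consecutive projection points into segments forming the arm's projected image.

    Returns a list of (start_point, end_point, width) tuples; a renderer can draw straight
    segments for generally linear parts or fit a curve through the points of curved parts.
    """
    segments = []
    for i in range(len(projection_points) - 1):
        width = contour_widths[i] if contour_widths is not None else 1.0
        segments.append((projection_points[i], projection_points[i + 1], width))
    return segments
```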
  • Step S16 displaying the projected image on the display.
  • the doctor can observe the motion state of all operating arms and the complete characteristic parts of each operating arm through the projection image, and there is no longer a blind spot, which helps to assist the doctor to perform operations reliably and continuously.
  • for example, the operating arms 31b and 31c may collide with each other outside the real visible area of the real camera, where neither the collision nor the potential collision can be observed directly.
  • Figure 9 illustrates a display interface that generates only a projected image of the surgical arm.
  • FIG. 10 illustrates another display interface that simultaneously generates projection images of the surgical arm and the camera arm.
  • the projection images in FIG. 9 and FIG. 10 both reflect the motion state of each feature point corresponding to the operating arm.
  • the controller can be configured to perform the following steps in the above step S15, that is, in the step of orderly fitting and connecting each projection point to generate the projection image of the operating arm:
  • Step S151 acquiring the icon of the end device of the operating arm.
  • the type of the manipulator can be obtained first, and then the icon of the end device of the manipulator can be matched according to the type of the manipulator.
  • the icon of the end device of the operating arm can be matched according to the acquired feature point sequence.
  • Step S152 Determine the pose of the end device in the virtual camera coordinate system according to the joint variables and the kinematics model.
  • Step S153 rotate and/or scale the icon according to the pose of the end device in the virtual camera coordinate system.
  • the icon is usually scaled according to the position of the terminal device in the virtual camera coordinate system, and the icon is rotated according to the posture (direction) of the terminal device in the virtual camera coordinate system.
  • Step S154 splicing the processed icon on the projection point at the far end to generate a projection image.
  • FIG. 12 illustrates a display interface
  • the display interface shows the shape of the end instrument of the corresponding operating arm in the projected image, although the projected image does not reflect the contour shape of the corresponding operating arm.
  • FIG. 13 illustrates another display interface, which also shows the shape of the end instrument of the corresponding operating arm in the projected image; in this case the projected image additionally reflects the contour shape of the corresponding operating arm.
  • the manipulation arm includes a camera arm with an image end instrument and/or a surgical arm with an operation end instrument.
  • the controller is also configured to perform the following steps:
  • Step S21 detecting whether there is a camera arm in the operating arm.
  • This step S21 can be triggered by the user through the input part.
  • the detection step can also be implemented by, for example, acquiring the type of the operating arm and then judging, according to the type, whether the operating arms include a camera arm; in practice, a camera arm is always present during surgery.
  • when it is detected that a camera arm is included among the operating arms, the process proceeds to step S22.
  • step S22 the camera parameters of the image end device of the camera arm are acquired, and the visible area of the image end device is calculated according to the camera parameters.
  • the camera parameters of the image end device include focal length and aperture.
  • Step S23 Determine the pose of the image end device in the reference coordinate system according to the joint variables of the camera arm and the kinematics model.
  • Step S24 according to the transformation relationship between the pose of the image end device and the pose of the virtual camera in the reference coordinate system, the visible area of the image end device is converted into the visible area of the virtual camera.
  • Step S25 Calculate the boundary line of the visible area of the virtual camera on the projection plane, and display the boundary line in the projection image displayed on the display.
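  • One way steps S22 to S25 could be realised is sketched below under simplifying assumptions: the visible area of the image end instrument is approximated by the four corners of its view frustum at a chosen depth, those corners are transformed into the reference frame and then projected with the `project_feature_points` helper sketched earlier, and the closed polyline of their projections is drawn as the boundary line.

```python
import numpy as np

def visible_area_boundary(T_ref_endoscope, T_cam_ref, fov_x, fov_y, depth, fx, fy, cx, cy):
    """Boundary of the real camera's visible area, drawn in the virtual camera's projection.

    T_ref_endoscope : (4, 4) pose of the image end instrument in the reference frame
    T_cam_ref       : (4, 4) transform from the reference frame to the virtual-camera frame
    fov_x, fov_y    : horizontal / vertical field of view of the image end instrument (rad)
    depth           : depth at which the frustum cross-section is taken
    """
    half_x, half_y = depth * np.tan(fov_x / 2), depth * np.tan(fov_y / 2)
    corners_endo = np.array([[-half_x, -half_y, depth], [ half_x, -half_y, depth],
                             [ half_x,  half_y, depth], [-half_x,  half_y, depth]])
    corners_ref = (T_ref_endoscope @ np.hstack([corners_endo, np.ones((4, 1))]).T).T[:, :3]
    boundary = project_feature_points(corners_ref, T_cam_ref, fx, fy, cx, cy)
    return np.vstack([boundary, boundary[:1]])      # closed polyline for display
```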
  • Fig. 15 illustrates a display interface
  • the projection image of the display interface shows the visible area of the image end device, and the part outside the visible area is the non-visible area.
  • the controller is further configured to perform the following steps in the above step S15, that is, in the step of orderly fitting and connecting each projection point to generate the projection image of the operating arm:
  • Step S151' acquiring the operation image of the operation area captured by the image end instrument of the camera arm.
  • Step S152' identifying the characteristic part of the operating arm from the operation image.
  • Image recognition can be used. More preferably, image recognition can be performed in combination with a neural network such as a convolutional neural network.
  • Step S153' matching the first associated feature points from the feature point sequence according to the identified feature parts.
  • the first feature point refers to one type of feature point; herein it refers to all feature points matched according to the identified feature parts, of which there may be one or more than two.
  • the second feature point refers to another type of feature point; herein it refers to all remaining feature points in the feature point sequence other than the first feature points, of which there may also be one or more than two.
  • Step S154' orderly fitting and connecting the projection points and marking the first projection point associated with the first feature point in the projection points and the line segment connected with the first projection point to generate the operation Projection image of the arm.
  • the controller may also be configured to perform the following steps in step S153', that is, after the step of matching the first associated feature point from the feature point sequence according to the identified feature parts:
  • Step S155' acquiring the unmatched second feature points.
  • the second feature point can be obtained by excluding the first feature point from the feature point sequence.
  • Step S156' generating an image model of the corresponding feature portion in combination with the contour information, joint variables and kinematic model of the feature portion corresponding to the second feature point.
  • This image model can be a reconstructed computer model or a computationally obtained projection model.
  • Step S157' transform the image model into a supplementary image in the coordinate system of the instrument at the end of the image.
  • Step S158' according to the sequence relationship between the second feature point and the first feature point in the feature point sequence, the supplementary image is spliced to the image of the feature part corresponding to the first feature point to form a complete sub-image of the operating arm in the operating image.
  • step S159' the operation image with the complete sub-image of the operation arm is displayed on the display.
  • FIG. 18 illustrates a display interface supplemented with an operating arm whose operating image is incomplete.
  • the doctor can also be assisted in viewing some characteristic parts of the operating arm that cannot be seen by the real camera.
  • the controller may also be configured to perform the following steps:
  • Step S31 obtaining the maximum movement range of the operating arm in the first direction.
  • Step S32 calculating the movement amount of the operating arm in the first direction according to the joint variables of the operating arm and the kinematics model.
  • Step S33 generating an icon according to the maximum motion range and motion amount in the first direction.
  • the maximum motion range may be pre-stored in the aforementioned storage unit.
  • step S34 an icon is displayed on the display.
  • For such a graphical display, reference may continue to be made to FIG. 9, FIG. 12 and FIG. 13.
  • the first direction can be one or more of the forward and backward feed direction, the left and right movement direction, the up and down movement direction, the autorotation direction, the pitch direction, and the yaw direction, which can be configured according to the effective degrees of freedom of the operating arm.
  • the first direction is the forward and backward feeding direction.
  • the icon can be a progress bar or a pie chart.
  • For example, in a progress bar, the maximum range of motion is shown as a fixed-length bar, and the amount of motion as a variable-length bar within the length of the fixed-length bar.
  • When the motion amount increases or decreases, the color of the variable-length bar can be darkened or lightened accordingly.
  • The proportion of the motion amount within the maximum motion range can also be calculated, separately or in combination, and displayed in the display area of the progress bar, for example inside the variable-length bar of the motion amount, as sketched below.
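  • A minimal sketch of how the progress-bar quantities of steps S31 to S34 might be computed is given below; the darkening rule and the color encoding are illustrative assumptions only.

```python
def progress_icon(motion_amount, max_motion_range):
    """Map the motion amount (step S32) and the maximum motion range (step S31)
    to the quantities used by a progress-bar icon (steps S33-S34)."""
    ratio = min(max(motion_amount / max_motion_range, 0.0), 1.0)
    # darken the variable-length bar as the motion amount grows (illustrative rule)
    shade = int(255 * (1.0 - 0.5 * ratio))
    return {
        "fill_ratio": ratio,               # length of the variable-length bar
        "bar_color": (shade, shade, 255),  # RGB, darker as ratio increases
        "label": f"{ratio:.0%}",           # proportional value shown in the bar
    }
```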
  • the controller may be further configured to detect the currently controlled first operating arm from the operating arms, thereby identifying the first operating arm in the projected image. In this way, the controlled and uncontrolled operating arms can be displayed differentially in the display. Wherein, whether the operating arm is controlled can be determined according to whether a start command for actively controlling the operating arm is detected.
  • different virtual cameras that can be selected by the input unit have different poses in the reference coordinate system, so as to simulate a real camera such as an image end device to observe the manipulator from different positions and/or poses (directions).
  • the pose of the virtual camera in the reference coordinate system may be determined based on the reachable workspace (referred to as the reachable space) of the manipulator arm in the reference coordinate system. This allows the pose of the virtual camera to be associated with its reachable workspace for easy determination.
  • the pose of the virtual camera in the reference coordinate system can be determined based on the union space of the reachable workspace of the manipulator arm in the reference coordinate system.
  • When there is only one operating arm, this union space is equal to the reachable workspace of that operating arm.
  • When there are two or more operating arms, this union space is the space corresponding to the union of the reachable workspaces of the operating arms.
  • the reachable workspace of each operating arm in the reference coordinate system can be determined according to the kinematic model of the operating arm, and stored in the aforementioned storage unit for direct recall.
  • the reachable workspace of each operating arm in the reference coordinate system can also be recalculated one or more times each time the surgical robot is activated according to the kinematic model of the operating arm.
  • the position of the virtual camera in the reference coordinate system is always located outside the union space, and the posture of the virtual camera in the reference coordinate system is always oriented toward the union space.
  • the pose of the virtual camera determined in this way can always fully observe the motion state of each operating arm, including observing the motion state of each operating arm and observing the motion state between the operating arms.
  • the virtual camera is configured with selectable virtual focal lengths.
  • In one embodiment, the position of the virtual camera only needs to lie outside the area determined by the shortest virtual focal length at which the entire union space can just be seen.
  • In another embodiment, it is also feasible for the position of the virtual camera to lie within the area determined by the minimum virtual focal length at which the entire union space can just be seen.
  • In one embodiment, the position of the virtual camera may be jointly defined by the longest and shortest focal lengths available for configuration: it lies in the intersection of the first area determined by the longest focal length and the second area determined by the shortest focal length.
  • the pose (direction) of the virtual camera is always towards a relatively certain point or area in the union space. In one embodiment, the pose of the virtual camera is always towards the center of the union space. In this way, it can be ensured that the virtual imaging plane of the virtual camera can always perform virtual imaging of each operating arm.
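  • As a minimal sketch of the pose rule above, the union space can be approximated by a bounding sphere of sampled workspace points; the camera is placed outside that sphere along a chosen direction and oriented toward its centre. The margin value and the up vector are illustrative assumptions.

```python
import numpy as np

def look_at_pose(camera_pos, target, up=(0.0, 0.0, 1.0)):
    """Rotation matrix whose z-axis points from the camera position toward the
    target (the centre of the union space); up must not be parallel to it."""
    z = np.asarray(target, float) - np.asarray(camera_pos, float)
    z /= np.linalg.norm(z)
    x = np.cross(np.asarray(up, float), z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])   # columns are the camera axes

def place_virtual_camera(workspace_points, direction, margin=1.2):
    """Place the virtual camera outside the union space and orient it toward
    the centre of that space."""
    pts = np.asarray(workspace_points, float)
    center = pts.mean(axis=0)
    radius = np.max(np.linalg.norm(pts - center, axis=1))
    d = np.asarray(direction, float) / np.linalg.norm(direction)
    cam_pos = center + d * radius * margin   # strictly outside the bounding sphere
    return cam_pos, look_at_pose(cam_pos, center)
```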
  • the controller may be configured to display the projected image in a first display window of the display, and to generate an icon of a selectable virtual camera in the first display window.
  • the relative position of the icon corresponding to the virtual camera and the projected image may be fixed and synchronously transformed with the transformation of the viewpoint of the projected image.
  • the transformation of the projected image viewpoint (i.e., its coordinates) is related to the position of the selected virtual camera.
  • the icons corresponding to the virtual cameras may be set to six, that is, representing virtual cameras in six different positions, and the six icons, for example, correspond to the left side, the right side, the upper side, the lower side, the front side and the rear side, respectively.
  • the icons are displayed as arrow patterns or camera patterns, and any one of the icons rotated and selected corresponds to a virtual camera.
  • the icon can also be, for example, a dot, a circle, or the like.
  • the icon is displayed as an arrow pattern, and the arrow shows the adjustment direction of the viewing angle.
  • the icon is displayed as a rotatable sphere, and any position reached by the sphere that is rotated corresponds to a virtual camera.
  • any position on the surface of the sphere may correspond to certain positions of the aforementioned first area, the second area and/or the intersection of the first and second areas, so any position to which the sphere is rotated can represent a virtual camera.
  • the postures of these virtual cameras are all directed to a certain point in the reachable space, so as to ensure that each complete operating arm can be seen.
  • the icon is displayed as a sphere, and the arrow shows the adjustable direction of the field of view.
  • the controller is configured to, in the above-mentioned step S13, that is, the step of acquiring the virtual camera selected by the input unit, execute:
  • Step S131 acquiring the virtual camera selected by the input unit and at least two target positions of the virtual camera input by the input unit.
  • In this step, selecting the virtual camera mainly refers to selecting its virtual focal length and/or virtual aperture; the at least two input target positions of the virtual camera may be two or more discrete positions, or two or more continuous positions.
  • When inputting the target positions, the tracking mode can be set at the same time, such as a single-tracking projection mode, a multiple-tracking projection mode and a reciprocating tracking projection mode.
  • A series of target positions includes a start position A and an end position B.
  • For the single-tracking projection mode, the projection at each target position from A to B is performed only once; for the multiple-tracking projection mode, the projection at each target position from A to B is performed a specified number of times; for the reciprocating tracking projection mode, the projection at each target position from A to B, then B to A, and so on, is repeated.
  • For the single-tracking and multiple-tracking projection modes, after the entire projection process from A to B is completed, the virtual camera can stay at a specified position and continue projecting; the specified position can be any position from A to B, such as A or B, or another default position.
  • Step S132 according to the preset motion speed of the virtual camera and according to the kinematic model and joint variables, determine the target projection point of each feature point in the feature point sequence on the projection plane under each target position of the virtual camera.
  • Step S133 orderly fitting and connecting each target projection point under each target position to generate a target projection image of the operating arm.
  • Step S134 generating animation according to each target projection image.
  • Step S135 playing the animation on the display according to the preset frequency.
  • Through steps S131 to S135, the doctor can dynamically observe the mutual positional relationship and projection information of the operating arms, resolve cases of partial information overlap or projection distortion under a single viewing angle, and understand the spatial position information from multiple directions; a minimal sketch of this animation pipeline is given below.
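  • The sketch below outlines the animation pipeline of steps S132 to S135 under the assumption that projection, fitting/connection and playback are available as callbacks; all parameter names are illustrative.

```python
def animate_projection(target_poses, feature_points_fn, project_fn,
                       fit_connect_fn, play_fn, frame_rate=25):
    """For each target pose of the virtual camera (step S132) project the
    feature-point sequence, fit/connect the target projection points into a
    target projection image (step S133), collect the frames (step S134) and
    play them at a preset frequency (step S135)."""
    frames = []
    for cam_pose in target_poses:
        pts_3d = feature_points_fn()              # from kinematic model + joint variables
        proj_pts = [project_fn(p, cam_pose) for p in pts_3d]
        frames.append(fit_connect_fn(proj_pts))   # ordered fitting and connection
    play_fn(frames, frame_rate)
```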
  • In one embodiment, referring to Figure 23, the controller is configured to execute, in the above step S13, that is, the step of acquiring the virtual camera selected by the input unit, the following:
  • Step S1311' acquiring the motion track of the virtual camera input by the input unit.
  • For example, the motion track may be the track of a cursor movement, or the sliding track of a finger.
  • For ease of implementation, the starting position of the motion track may be the position of the virtual camera corresponding to one of the aforementioned icons, with coordinates (x0, y0, z0); for the other positions in the motion track, the Z-axis coordinate remains unchanged and only the X-axis and Y-axis coordinates change.
  • In other embodiments, the starting position of the motion trajectory is not necessarily the position of the virtual camera corresponding to one of the aforementioned icons, but it is usually necessary to first specify the Z-axis coordinate of the entire trajectory and then change only the X-axis and Y-axis coordinates.
  • FIG. 24 illustrates a configuration interface for a virtual camera motion trajectory.
  • Step S1312' discretizing the motion trajectory to obtain the discrete positions of the virtual camera as target positions (see the sketch after this step list).
  • Step S132 according to the preset motion speed of the virtual camera and according to the kinematic model and joint variables, determine the target projection point of each feature point in the feature point sequence on the projection plane under each target position of the virtual camera.
  • Step S133 orderly fitting and connecting each target projection point under each target position to generate a target projection image of the operating arm.
  • Step S134 generating animation according to each target projection image.
  • Step S135 playing the animation on the display according to the preset frequency.
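  • A minimal sketch of the trajectory discretization of step S1312' is shown below, assuming the input track is a 2-D cursor or finger path and the Z coordinate of the whole trajectory is specified first; the sampling step is an illustrative value.

```python
import numpy as np

def discretize_trajectory(track_xy, z_fixed, step=5.0):
    """Sample a drawn 2-D motion track at roughly equal arc-length intervals and
    lift each sample to 3-D with a fixed Z coordinate, giving the discrete
    target positions of the virtual camera (step S1312')."""
    track = np.asarray(track_xy, float)
    seg = np.linalg.norm(np.diff(track, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    samples = np.arange(0.0, s[-1] + 1e-9, step)
    xs = np.interp(samples, s, track[:, 0])
    ys = np.interp(samples, s, track[:, 1])
    return [np.array([x, y, z_fixed]) for x, y in zip(xs, ys)]
```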
  • In one embodiment, as shown in Figure 25, the controller is also generally configured to perform the following steps:
  • Step S41 acquiring the operation image of the operation area collected by the image end instrument.
  • Step S42 displaying the operation image on the display.
  • Step S43 the projection image is displayed floating in the operation image.
  • the position of the projected image in the operating image can be changed relatively easily.
  • a floating window is generated in the display, the floating window displays the projected image, and the remaining area of the display displays the operation image. This helps to allow the projected image to avoid some key positions in the operation image as needed to facilitate the operation.
  • the controller may also be configured to, in step S43, that is, in the step of displaying the projected image suspended in the operation image, execute:
  • Step S431 acquiring the overlapping area of the operation image and the projection image, and acquiring the first image attribute of the part of the operation image in the overlapping area.
  • Step S432 adjusting the second image attribute of the portion of the projected image in the overlapping area according to the first image attribute.
  • These image properties include one of color, saturation, hue, brightness and contrast, or a combination of two or more of them, for example one or a combination of color, brightness and contrast.
  • Through steps S431 to S432, the image attributes of the projected image can be adjusted adaptively according to the image attributes of the operation image. For example, when the operation image is dark, the projected image can be brightened, or its color can be changed, so that the projected image stands out relative to the operation image for easy observation by the doctor; a minimal sketch of such an adjustment is given below.
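  • The sketch below shows one way the adjustment of steps S431 to S432 could look, using mean brightness as the first image attribute and a gain on the projected image as the second; the brightness threshold and gain are illustrative assumptions.

```python
import numpy as np

def adjust_overlay(operation_img, projection_img, top_left):
    """Re-shade the floating projection image according to the brightness of the
    part of the operation image it overlaps (steps S431-S432)."""
    h, w = projection_img.shape[:2]
    y, x = top_left
    overlap = operation_img[y:y + h, x:x + w]
    background_brightness = overlap.mean()           # first image attribute
    gain = 1.4 if background_brightness < 80 else 1.0
    adjusted = projection_img.astype(float) * gain   # second image attribute
    return np.clip(adjusted, 0, 255).astype(np.uint8)
```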
  • the controller may also be configured to, in step S16, before the step of displaying the projected image on the display, execute:
  • Step S161 detecting whether the projected image is distorted.
  • When it is detected that the projected image is distorted, the process goes to step S162; when it is detected that the projected image is not distorted, the process goes to step S16.
  • Exemplarily, whether the projected image is distorted can be judged as follows: Step 1, obtain the position of each projection point in the reference coordinate system; Step 2, obtain the number of first projection points, i.e. the projection points that fall within the edge area; Step 3, calculate the ratio of the number of first projection points to the total number of projection points, and when the ratio reaches a threshold, determine that the projected image is distorted.
  • The edge region can be delimited, for example, based on the display window in which the projected image is displayed or on the projection plane; a minimal sketch of this distortion check is given below.
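  • A minimal sketch of the ratio-based distortion check (Steps 1 to 3 above) is given below; the edge-band width and the threshold are illustrative values only.

```python
def is_distorted(proj_points, window_w, window_h, edge_frac=0.1, threshold=0.3):
    """Count the first projection points, i.e. projection points falling in the
    edge band of the display window, and flag distortion when their share of
    all projection points reaches the threshold."""
    def in_edge(x, y):
        return (x < edge_frac * window_w or x > (1 - edge_frac) * window_w or
                y < edge_frac * window_h or y > (1 - edge_frac) * window_h)
    first_points = sum(1 for x, y in proj_points if in_edge(x, y))
    return first_points / max(len(proj_points), 1) >= threshold
```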
  • Step S162 increasing the virtual focal length of the virtual camera.
  • That is, according to the approximately inverse relationship between focal length and field of view, the angle of view is reduced.
  • Step S14' re-entering the step of determining the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera in combination with the virtual focal length and/or virtual aperture, the kinematic model and the joint variables.
  • Figure 28 shows a schematic diagram of observing the operating arm with a large field of view
  • Figure 29 shows the display interface with the first projected image generated under the field of view shown in Figure 28; it can be seen that the edge area of the projected image suffers from a compression problem, i.e. it is distorted.
  • Figure 30 illustrates the regenerated display interface with the second projected image after the FOV is adjusted. It can be seen that the edge area of the projected image is expanded, which eliminates the distortion problem.
  • Exemplarily, step S162 may increase the virtual focal length of the virtual camera by a proportional coefficient, i.e. according to formula (1): F = k*f, where k is the adjustment coefficient with k > 1, f is the focal length before adjustment, and F is the focal length after adjustment.
  • In one embodiment, the virtual focal length can also be re-determined according to the following formula (2), taking the horizontal direction as an example: fx = k1*Fx*fx0;
  • where fx0 is the focal length at the center of the projection plane; Fx is the distance of a projection point from the center of the projection plane along the X-axis direction; k1 is a setting coefficient; and fx is the x-direction focal length at that projection point. In order to increase the virtual focal length, it is only necessary to satisfy k1*Fx > 1.
  • Formula (2) associates the virtual focal length of the virtual camera with the position of the projection point, that is, the virtual focal length is position-dependent, and the focal length to be applied changes as the projection point changes. Here x denotes any point in the projection plane, and the position of a projection point P in the projection plane is written as P(Fx, Fy).
  • By the same principle, the vertical-direction focal length and the offsets relative to the optical axis can be re-determined, for example as fy = k2*Fy*fy0, cx = k3*Fx*cx0 and cy = k4*Fy*cy0.
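  • The sketch below transcribes formula (2) and its vertical-direction counterpart as a position-dependent focal length; the coefficient values are illustrative only.

```python
def local_focal_length(Fx, Fy, fx0, fy0, k1=1.2, k2=1.2):
    """Position-dependent virtual focal length in the spirit of formula (2):
    fx = k1 * Fx * fx0 and, by the same principle, fy = k2 * Fy * fy0.
    Choosing k1 * Fx > 1 (and k2 * Fy > 1) increases the focal length used for
    projection points far from the centre, expanding the compressed edge region."""
    return k1 * Fx * fx0, k2 * Fy * fy0
```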
  • the controller may also be configured to perform:
  • the operation instruction for displaying or hiding the image of the corresponding operating arm is obtained, and then the image of the corresponding operating arm is displayed or hidden according to the operating instruction.
  • Specifically, when an operation instruction to display the image of an operating arm is obtained, the projection points corresponding to that operating arm are determined in step S14.
  • When an operation instruction to hide the image of an operating arm is obtained, it is correspondingly unnecessary to determine the projection points corresponding to that operating arm in step S14.
  • This is equivalent to a customizable configuration of the projected image, so as to simplify the projected image and remove interfering sub-images.
  • a similar purpose can be achieved at least in part by adjusting the virtual aperture (virtual depth of field) of the virtual camera.
  • For example, an operating arm far away from the virtual camera can be blurred by adjusting the virtual aperture, so that only the operating arms adjacent to the virtual camera are imaged sharply.
  • the above-mentioned graphical display method may further include:
  • When a first one of the operating arms reaches the threshold of an event, at least a portion of the first operating arm is identified in the projected image and displayed on the display.
  • The first operating arm likewise refers to a type of operating arm and is not limited to one specific operating arm.
  • the threshold is a warning threshold and the event is a condition to avoid.
  • the warning threshold is based on the distance between the first operating arm and the second one of the operating arms, for example, the warning threshold may be a numerical value.
  • the situation to be avoided is a collision between the first operating arm and the second operating arm; for example, the situation to be avoided may likewise be represented by a numerical value.
  • The second operating arm likewise refers to a type of operating arm and is not limited to one specific operating arm. For example, as shown in Figure 31, the method can be implemented by the following steps:
  • Step S51 obtaining the minimum distance between the first operating arm and the second operating arm.
  • This step S51 is performed in real time.
  • Step S52 determine the relationship between the minimum distance, the warning threshold and the situation to be avoided.
  • The warning threshold and the situation to be avoided are both represented by numerical values, and when the situation to be avoided is a collision between the first operating arm and the second operating arm, the value d_lim of the warning threshold is greater than the value d_min of the situation to be avoided, i.e. d_lim > d_min; the minimum distance between the two arms is denoted d, and in one embodiment d_min = 0, which represents that a collision has occurred. If d > d_lim, the minimum distance has not reached the warning threshold and step S51 continues; if d_min < d ≤ d_lim, the minimum distance has reached the warning threshold but not the situation to be avoided, and the process proceeds to step S53; if d = d_min, the minimum distance has passed the warning threshold and reached the situation to be avoided, and the process proceeds to step S54.
  • Step S53 performing a first identification on the minimum distance point on the projected images of the first operating arm and the second operating arm.
  • As shown in Figure 32, the operating arms include the camera arm 31a and the surgical arms 31b and 31c, and the minimum distance between the surgical arm 31b and the surgical arm 31c has reached the warning threshold.
  • In this case, in step S53, the minimum distance points P1 and P2 in the projected images of the surgical arm 31b (i.e., the first operating arm) and the surgical arm 31c (i.e., the second operating arm) can be marked with colors or with graphic frames such as circles, as shown in FIG. 33.
  • When it is re-detected that the minimum distance no longer reaches the warning threshold, the identification of the minimum distance points on the projected images of the first operating arm and the second operating arm is generally eliminated.
  • When it is detected that the minimum distance has reached the situation to be avoided, step S54 is entered, that is, the second identification is performed.
  • In addition, during the first identification, the first identification may be changed as the minimum distance gradually decreases or increases, for example by progressively changing its color or by flashing it.
  • Step S54 performing a second identification on the minimum distance point on the projected images of the first operating arm and the second operating arm.
  • the first identification is different from the second identification.
  • In step S54, the identification of the minimum distance points P1 and P2 in the models of the first operating arm and the second operating arm may be enhanced, for example by deepening the color; or the identification of the minimum distance points in the projected images of the first operating arm and the second operating arm may be made to flash; alternatively, the type of the identification may be changed, such as changing the type of the graphic frame: as shown in Figure 34, the solid-line circles of FIG. 33 are replaced by dashed-line circles.
  • Steps S51 to S54 help the doctor to grasp the collision position between the operating arms.
  • step S51 can be implemented by the following steps:
  • Step S511 constructing the respective geometric models of the first operating arm and the second operating arm according to their respective kinematic models and structural features.
  • In step S511, a slightly enlarged basic geometric body can usually be used in place of the actual model to perform the interference analysis, so as to improve the efficiency of the subsequent detection.
  • the respective geometric models of the first operating arm and the second operating arm can be simplified into, for example, a sphere, a cylinder, a cuboid, a convex polyhedron, or a combination of two or more.
  • Step S512 discretizing the respective geometric models of the first operating arm and the second operating arm to obtain their respective external information point sets in the reference coordinate system.
  • In this step, the geometric models of the two arms are digitized to obtain their respective external information point sets.
  • Step S513 Determine the minimum distance between the first operating arm and the second operating arm according to the respective external information point sets of the first operating arm and the second operating arm.
  • In step S513, a distance tracking method can be used to determine the minimum distance between the two arms; more specifically, the minimum distance can be determined from the respective external information point sets of the first operating arm and the second operating arm through a traversal algorithm, as sketched below.
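  • A minimal sketch of the traversal over the two external information point sets is given below; a brute-force pairwise search stands in for the distance tracking method, which is an assumption rather than the method described here. The returned pair of closest points can then be compared against d_lim and, when the threshold is reached, marked as P1 and P2 in the projected image.

```python
import numpy as np

def min_distance(points_a, points_b):
    """Return the minimum distance between the two external information point
    sets (step S513) together with the pair of closest points, which serve as
    the minimum distance points to be marked (step S531)."""
    A = np.asarray(points_a, float)      # (N, 3) points of the first operating arm
    B = np.asarray(points_b, float)      # (M, 3) points of the second operating arm
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    i, j = np.unravel_index(np.argmin(d2), d2.shape)
    return float(np.sqrt(d2[i, j])), A[i], B[j]
```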
  • step S53 can be implemented by the following steps:
  • Step S531 determining the minimum distance point on the projected image of the first operating arm and the second operating arm corresponding to the minimum distance between the first operating arm and the second operating arm.
  • Step S532 performing a first identification on the minimum distance point on the projected images of the first operating arm and the second operating arm.
  • the graphical display method may further include the following steps:
  • Step S533 determining the collision direction according to the position of the minimum distance point on the projected images of the first operating arm and the second operating arm in the reference coordinate system.
  • Step S534 marking the collision direction between the first operating arm and the second operating arm in the projected image.
  • The above identification, in the projected image, of the minimum distance points and of the collision direction between the first operating arm and the second operating arm, for example marking the collision direction with an arrow vector, can provide visual feedback for the doctor to avoid a collision.
  • the handle of the main console adopts a mechanical handle.
  • In one embodiment, as shown in Figure 38, corresponding to the case of step S53 above, that is, when the minimum distance reaches the warning threshold but does not reach the situation to be avoided, the method includes:
  • Step S533 determining the collision direction according to the position of the minimum distance point on the projected images of the first operating arm and the second operating arm in the reference coordinate system.
  • Step S535 generating a resistance that prevents the mechanical handle from moving in the associated direction according to the collision direction.
  • This provides force feedback to the physician to avoid collisions when there is a tendency to collide between the manipulator arms.
  • Specifically, the mechanical handle has a plurality of joint assemblies, sensors coupled with the controller for sensing the state of each joint assembly, and driving motors coupled with the controller for driving each joint assembly to move. Generating a resistance that hinders movement of the mechanical handle in the associated direction according to the collision direction more specifically means: causing the driving motor of the associated direction to generate a reverse torque corresponding to the resistance.
  • When the minimum distance lies between the warning threshold and the situation to be avoided, for example, the reverse torque may be of constant magnitude; for another example, the magnitude of the reverse torque may be negatively correlated with the magnitude of the minimum distance.
  • In the negatively correlated case, when the minimum distance gradually decreases, the reverse torque is increased to generate greater resistance; when the minimum distance gradually increases, the reverse torque is decreased to generate smaller resistance.
  • For example, the change of the reverse torque may be linear; alternatively, it may be non-linear, for example stepwise; a minimal sketch of such a mapping is given below.
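  • A minimal sketch of such a torque mapping is given below; the maximum torque and the step levels are illustrative values, not parameters of this application.

```python
def reverse_torque(d, d_lim, d_min=0.0, tau_max=2.0, mode="linear"):
    """Map the minimum distance d to a reverse torque resisting handle motion in
    the collision direction: zero above the warning threshold d_lim, growing as
    d decreases, and largest when d reaches d_min (the situation to be avoided)."""
    if d > d_lim:
        return 0.0
    x = (d_lim - d) / max(d_lim - d_min, 1e-9)   # 0 at the threshold, 1 at collision
    if mode == "linear":
        return tau_max * x
    # stepwise (non-linear) alternative
    return tau_max * (0.25 if x < 0.33 else 0.5 if x < 0.66 else 1.0)
```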
  • When the minimum distance reaches the situation to be avoided, the generated reverse torque can, at a minimum, completely block movement of the mechanical handle in the collision direction; in one embodiment, the force sensors provided at each joint assembly of the mechanical handle can detect the force or moment applied by the doctor, and an opposing moment is then produced, based on the doctor-applied force or moment, that at least counteracts the doctor-applied force.
  • In another embodiment, a force large enough that a doctor of ordinary strength cannot move the mechanical handle in the collision direction can also be generated abruptly.
  • the warning threshold may also be based on the range of motion of at least one joint assembly in the first operating arm, and the situation to be avoided is the limitation of the range of motion of at least one joint assembly in the first operating arm.
  • the first operating arm reaches the warning threshold, at least the related joint components of the model of the first operating arm may be identified in the first display window or the second display window.
  • resistance to movement of the first operating arm over the warning threshold towards the situation to be avoided may also be created at the mechanical handle. This resistance is also achieved by the counter torque generated by the associated drive motor.
  • The surgical robot of the above-described embodiments may also be a multi-port surgical robot.
  • The difference between the multi-port surgical robot and the single-port surgical robot lies mainly in the slave operating device.
  • Figure 39 illustrates the slave operating device of a multi-port surgical robot.
  • The robotic arm of the slave operating device in the multi-port surgical robot has a main arm 110, an adjustment arm 120 and a manipulator 130 which are connected in sequence. There are two or more adjustment arms 120 and manipulators 130, for example four of each.
  • the distal end of the main arm 110 has an orientation platform, the proximal ends of the adjustment arms 120 are connected to the orientation platform, and the proximal end of the manipulator 130 is connected to the distal end of the adjustment arms 120 .
  • the manipulator 130 is used for detachably connecting the operating arm 150, and the manipulator 130 has a plurality of joint components.
  • different operating arms 150 are inserted into the patient through different trocars.
  • Compared with the operating arm 31 of the single-port surgical robot, the operating arm 150 of the multi-port surgical robot generally has fewer degrees of freedom.
  • Typically, the operating arm 150 only has attitude degrees of freedom (i.e., orientation degrees of freedom); of course, a change in its attitude generally also affects its position, but because the effect is small it can usually be ignored.
  • The position of the operating arm 150 is often adjusted with the assistance of the manipulator 130; since the manipulator 130 and the operating arm 150 move together to realize pose changes, the two can be regarded as a manipulator assembly, which is equivalent to the operating arm 31 in the single-port surgical robot.
  • the graphical control apparatus may include: a processor (processor) 501 , a communication interface (Communications Interface) 502 , a memory (memory) 503 , and a communication bus 504 .
  • the processor 501 , the communication interface 502 , and the memory 503 communicate with each other through the communication bus 504 .
  • the communication interface 502 is used to communicate with network elements of other devices such as various types of sensors or motors or solenoid valves or other clients or servers.
  • the processor 501 is configured to execute the program 505, and specifically may execute the relevant steps in the foregoing method embodiments.
  • the program 505 may include program code including computer operation instructions.
  • The processor 501 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), one or more integrated circuits configured to implement the embodiments of the present application, or a graphics processing unit (GPU).
  • One or more processors included in the control device may be the same type of processors, such as one or more CPUs, or one or more GPUs; or may be different types of processors, such as one or more CPUs and one or more GPUs.
  • the memory 503 is used to store the program 505 .
  • the memory 503 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
  • the program 505 can be specifically used to make the processor 501 perform the following operations: obtain the feature point sequence of the operating arm and its corresponding kinematic model; obtain the joint variables sensed by the sensor, and obtain the virtual camera selected by the input part; The joint variables determine the projection points of each feature point in the feature point sequence on the projection plane of the virtual camera; orderly fit and connect the projection points to generate the projection image of the operating arm; and display the projected image on the display.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Manipulator (AREA)

Abstract

一种手术机器人及其图形化控制装置、图形化显示方法,手术机器人包括:输入部;显示器(22);操作臂(31),具有由有序排列的特征点构成的特征点序列,特征点表征关节;及控制器,控制器与输入部、显示器(22)及传感器耦接,被配置成:获得操作臂(31)的特征点序列及其对应的运动学模型(S11);获取传感器感应的关节变量(S12),并获取输入部选择的虚拟相机(S13);根据运动学模型及关节变量确定特征点序列中各特征点在虚拟相机的投影平面的投影点(S14);有序的拟合连接各投影点生成操作臂(31)的投影图像(S15);在显示器(22)中显示投影图像(S16)。手术机器人便于医生全方位的观察操作臂(31)的运动状态。

Description

手术机器人及其图形化控制装置、图形化显示方法
本申请要求于2020年10月8日提交中国专利局、申请号为CN202011068091.4、申请名称为“手术机器人及其图形化控制装置、图形化显示方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及医疗器械领域,特别是涉及一种手术机器人及其图形化控制装置、图形化显示方法。
背景技术
微创手术是指利用腹腔镜、胸腔镜等现代医疗器械及相关设备在人体腔体内部施行手术的一种手术方式。相比传统手术方式微创手术具有创伤小、疼痛轻、恢复快等优势。
随着科技的进步,微创手术机器人技术逐渐成熟,并被广泛应用。手术机器人包括主操作台及从操作设备,从操作设备包括多个操作臂,这些操作臂包括具有图像末端器械的相机臂及具有操作末端器械的手术臂。主操作台包括显示器及手柄。医生在显示器显示的由相机臂提供的视野下操作手柄控制相机臂或手术臂运动。
然而,在大多数场景下,如图1所示,相机臂34A’提供的视野往往只能观察到手术臂34B’的局部区域,能够被观察到的区域为可视区域,可知还有大片不可视区域。医生无法在可视区域观察到相机臂34A’自身的状态,也无法在不可视区域观察到手术臂34B’之间或手术臂34B’与相机臂34A’之间存在已发生的碰撞或潜在会发生的碰撞,这种情况容易引发手术安全问题。
发明内容
基于此,有必要提供一种便于医生全方位的观察操作臂的运动状态的手 术机器人及其图形化控制装置、图形化显示方法。
一方面,本申请提供了一种手术机器人,包括:输入部;显示器;操作臂,包括多个关节及感应所述关节的关节变量的传感器,所述操作臂具有由有序排列的、用于关联相应所述关节的多个特征点构成的特征点序列;及控制器,所述控制器与所述输入部、显示器及所述传感器耦接,被配置成:获得所述操作臂的特征点序列及其对应的运动学模型;获取所述传感器感应的关节变量,并获取所述输入部选择的虚拟相机;根据所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的投影平面的投影点;有序的拟合连接各所述投影点生成所述操作臂的投影图像;在所述显示器中显示所述投影图像。
其中,所述控制器在根据所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的投影平面的投影点的步骤中,被配置成:根据所述运动学模型及所述关节变量获得所述特征点序列中各特征点在参考坐标系下的第一位置;将各所述第一位置分别转换成在所述虚拟相机坐标系下的第二位置;获取所述虚拟相机的虚拟焦距并根据所述虚拟焦距确定所述虚拟相机的投影平面;根据所述虚拟焦距获得各所述第二位置在所述投影平面的投影点。
其中,所述控制器在根据所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的投影平面的投影点的步骤中,被配置成:根据所述运动学模型及所述关节变量获得所述特征点序列中各特征点在参考坐标系下的第一位置;将各所述第一位置分别转换成在所述虚拟相机坐标系下的第二位置;获取各所述特征点对应的所述关节的轮廓信息;结合所述虚拟焦距及所述轮廓信息获得各所述第二位置在所述投影平面的投影点。
其中,所述控制器在有序的拟合连接各所述投影点生成所述操作臂的投影图像的步骤中,被配置成:结合所述轮廓信息有序的拟合连接各所述投影点生成所述操作臂的投影图像。
其中,所述控制器在有序的拟合连接各所述投影点生成所述操作臂的投 影图像的步骤中,被配置成:根据各所述投影点对应的特征点在所述特征点序列中的顺序有序的连接各所述投影点进而生成所述操作臂的投影图像。
其中,所述控制器在有序的拟合连接各所述投影点生成所述操作臂的投影图像的步骤中,被配置成:获取所述操作臂的末端器械的图标;根据所述关节变量及所述运动学模型确定所述末端器械在所述虚拟相机的投影平面的位姿;根据所述末端器械在所述虚拟相机的投影平面的位姿对所述图标进行旋转及/或缩放处理;将经处理后的所述图标拼接于远端的所述投影点进而生成所述投影图像。
其中,所述控制器在获取所述操作臂的末端器械的图标的步骤中,被配置成:获取所述操作臂的类型,并根据所述类型匹配出所述操作臂的末端器械的图标。
其中,所述虚拟相机具有可选择的虚拟焦距及/或虚拟光圈,所述控制器在根据所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的投影平面的投影点的步骤中,被配置成:获取所述输入部选择的所述虚拟相机的虚拟焦距及/或虚拟光圈,结合所述虚拟焦距及/或虚拟光圈、所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的投影平面的投影点。
其中,所述控制器被配置成在所述显示器中显示所述投影图像的步骤之前,执行:检测所述投影图像是否失真;在检测到所述投影图像失真时,增大所述虚拟相机的虚拟焦距并重新进入结合所述虚拟焦距及/或虚拟光圈、所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的投影平面的投影点的步骤;在检测到所述投影图像未失真时,进入在所述显示器中显示所述投影图像的步骤。
其中,所述控制器被配置成:获取各所述投影点在参考坐标系的位置;获得所述投影点中落入所述投影平面的边缘区域或所述显示器中用于显示所述投影图像的显示窗口的边缘区域的第一投影点的数量;计算所述第一投影点的数量在所述投影点的总数量中的比值,并在所述比值达到阈值时,判断 出所述投影图像失真。
其中,所述操作臂包括具有图像末端器械的相机臂;所述控制器还被配置成:获取所述相机臂的图像末端器械的相机参数,并根据所述相机参数计算所述图像末端器械的可见区域,所述相机参数包括焦距和光圈;根据所述相机臂的关节变量及运动学模型确定所述图像末端器械在参考坐标系下的位姿;根据在参考坐标系下所述图像末端器械的位姿及所述虚拟相机的位姿之间的转换关系将所述图像末端器械的可见区域换算为所述虚拟相机的可见区域;计算所述虚拟相机的可见区域在所述投影平面上的边界线,并在所述显示器显示的所述投影图像中显示所述边界线。
其中,所述操作臂包括具有图像末端器械的相机臂及具有操作末端器械的手术臂;所述控制器还被配置成在有序的拟合连接各所述投影点生成所述操作臂的投影图像的步骤中,执行:获取由所述相机臂的图像末端器械采集的手术区域的操作图像;从所述操作图像中识别出所述手术臂的特征部位;根据识别出的所述特征部位从所述特征点序列中匹配出关联的第一特征点;有序的拟合连接各所述投影点并标记所述投影点中关联所述第一特征点的第一投影点及与所述第一投影点连接的线段以生成所述操作臂的投影图像。
其中,所述特征点序列还包括未匹配到的第二特征点,所述控制器在根据识别出的所述特征部位从多个所述特征点序列中匹配出关联的第一特征点的步骤后,被配置成:获取未匹配到的所述第二特征点;结合所述第二特征点对应的特征部位的轮廓信息、关节变量及运动学模型生成相应所述特征部位的图像模型;将所述图像模型转换成在所述图像末端器械坐标系下的补充图像;根据所述第二特征点与所述第一特征点在所述特征点序列中的顺序关系将所述补充图像拼接到所述第一特征点对应的特征部位的图像以在所述操作图像中形成所述操作臂完整的子图像;显示具有所述操作臂完整的子图像的所述操作图像。
其中,所述控制器还被配置成:获取所述操作臂的在第一方向上的最大运动范围;根据所述操作臂的关节变量及运动学模型计算所述操作臂在所述 第一方向上的运动量;根据所述第一方向上的所述最大运动范围及所述运动量生成图标;在所述显示器中显示所述图标。
其中,所述第一方向是前后进给方向。
其中,所述图标是进度条或饼图。
其中,所述控制器被配置成在所述运动量增大或减小时,相应加深或减淡所述可变长度条的颜色。
其中,所述控制器被配置成:从所述操作臂中检测出当前受控制的第一操作臂,并在所述投影图像中标识出所述第一操作臂。
其中,多个可供所述输入部选择的所述虚拟相机在参考坐标系下具有不同的位姿。
其中,所述虚拟相机在参考坐标系下的位姿基于所述操作臂在参考坐标系下的可达工作空间而确定。
其中,所述虚拟相机在参考坐标系下的位姿基于所述操作臂在参考坐标系下的可达工作空间的并集空间而确定。
其中,所述虚拟相机在参考坐标系下的位置始终位于所述并集空间的外部,且所述虚拟相机在参考坐标系下的姿态始终朝向所述并集空间。
其中,所述虚拟相机具有可供选择的虚拟焦距,所述虚拟相机的位置位于第一区域以外,所述第一区域为最短所述虚拟焦距恰好能可见所述并集空间所确定的区域。
其中,所述虚拟相机具有可供选择的虚拟焦距,所述虚拟相机的位置位于第二区域以内,所述第二区域为最长所述虚拟焦距恰好能可见所述并集空间所确定的区域。
其中,所述虚拟相机的姿态始终朝向所述并集空间的中心。
其中,所述控制器被配置成在所述显示器的第一显示窗口中显示所述投影图像,并在所述第一显示窗口中生成多个可供选择的所述虚拟相机的图标。
其中,所述图标与所述投影图像的相对位置固定,随着所述投影图像视点的变换而变换。
其中,所述图标设置成六个,分别对应从左侧、右侧、上侧、下侧、前侧及后侧对所述操作臂进行虚拟成像以生成相应视点下的所述投影图像。
其中,所述图标展现为箭头图案或相机图案,所述图标被转动选择的任意一个对应一个所述虚拟相机。
其中,所述图标展现为可转动的球体,所述图标被旋转到达的任意一个位置对应一个所述虚拟相机。
其中,所述控制器被配置成在获取所述输入部选择的虚拟相机的步骤中,执行:获取所述输入部选择的虚拟相机及所述输入部输入的所述虚拟相机的至少两个目标位置;按照所述虚拟相机的预设运动速度并根据所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的每一目标位置下的投影平面的目标投影点;有序的拟合连接每一所述目标位置下的各所述目标投影点生成所述操作臂的目标投影图像;根据各所述目标投影图像生成动画;按照预设频率在所述显示器上播放所述动画。
其中,所述控制器被配置成在获取所述输入部选择的虚拟相机的步骤中,执行:获取所述输入部输入的虚拟相机的运动轨迹;离散所述运动轨迹获得所述虚拟相机的各离散位置以作为目标位置;按照所述虚拟相机的预设运动速度并根据所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的每一所述目标位置下的投影平面的目标投影点;有序的拟合连接每一目标位置下的各所述目标投影点生成所述操作臂的目标投影图像;根据各所述目标投影图像生成动画;按照预设频率在所述显示器上播放所述动画。
其中,所述操作臂包括具有图像末端器械的相机臂;所述控制器被配置成:获取所述图像末端器械采集的手术区域的操作图像;在所述显示器中显示所述操作图像;在所述操作图像中悬浮的显示所述投影图像。
其中,所述控制器被配置成在所述操作图像中悬浮的显示所述投影图像的步骤中,执行:获取所述操作图像与所述投影图像的重叠区域,并获得所述操作图像在所述重叠区域的部分的第一图像属性;根据所述第一图像属性 对所述投影图像在所述重叠区域的部分的第二图像属性进行调节。
其中,所述控制器被配置成:在所述操作臂中的第一操作臂达到事件的阈时,在所述投影图像中对所述第一操作臂的至少部分进行标识并显示于所述显示器。
其中,所述阈是警告阈,所述事件是要避免的情况。
其中,所述警告阈基于所述第一操作臂中至少一个关节的运动范围,所述要避免的情况是所述第一操作臂中至少一个关节的运动范围的限制。
其中,所述警告阈基于所述第一操作臂与所述操纵器中第二操作臂之间的距离,所述要避免的情况是所述第第一操作臂与所述第二操作臂之间的碰撞。
其中,所述控制器被配置成:获取所述第一操作臂与所述第二操作臂之间的最小距离并判断所述最小距离与所述警告阈之间的关系;当所述最小距离达到所述警告阈未到达要避免的情况相应的阈值时,形成对所述第一操作臂和所述第二操作臂的子图像上的最小距离点进行标识的第一标识。
其中,所述控制器被配置成:当所述最小距离达到达要避免的情况时,形成对所述第一操作臂和所述第二操作臂的模型上的最小距离点进行标识的第二标识。
其中,所述控制器被配置成在获取所述第一操作臂与所述第二操作臂之间的最小距离并判断所述最小距离与所述警告阈之间的关系的步骤中,执行:根据所述第一操作臂和所述第二操作臂的运动学模型及结构特征构建相应所述第一操作臂和所述第二操作臂的几何模型;离散所述第一操作臂和所述第二操作臂的几何模型获得在参考坐标系下所述第一操作臂和所述第二操作臂的外部信息点集;根据所述第一操作臂和所述第二操作臂的外部信息点集确定所述第一操作臂和所述第二操作臂之间的最小距离;对所述第一操作臂和所述第二操作臂的子图像上的最小距离点进行标识中,包括:确定所述最小距离对应的最小距离点,并对所述第一操作臂和所述第二操作臂的模型上的最小距离点进行标识。
其中,所述控制器被配置成:当所述最小距离达到所述警告阈,根据所述第一操作臂和所述第二操作臂的子图像上的最小距离点在参考坐标系下的位置确定碰撞方向;对所述第一操作臂和所述第二操作臂之间的所述碰撞方向进行标识。
其中,所述手术机器人包括与所述控制器耦接、且用于控制所述操作臂运动的机械手柄,所述控制器被配置成:根据所述碰撞方向产生阻碍所述机械手柄在关联方向上移动的阻力。
其中,所述机械手柄具有多个关节组件及驱动各所述关节组件运动的驱动电机,各所述驱动电机与所述控制器耦接,所述控制器被配置成:根据所述阻力使关联方向上的所述驱动电机产生反向力矩。
其中,所述控制器被配置成:在所述最小距离介于所述警告阈和要避免的情况相应的阈值之间时,所述反向力矩的大小与所述最小距离的大小呈负相关。
另一方面,本申请提供了一种手术机器人的图形化显示方法,所述手术机器人包括:输入部;显示器;操作臂,包括多个关节及感应所述关节的关节变量的传感器,多个所述关节构成定位自由度及/或定向自由度,所述操作臂具有由有序排列的特征点构成的特征点序列,所述特征点表征所述关节;所述控制方法包括如下步骤:获得所述操作臂的特征点序列及其对应的运动学模型;获取所述传感器感应的关节变量,并获取所述输入部选择的虚拟相机;根据所述运动学模型及所述关节变量确定所述特征点序列中各特征点在所述虚拟相机的投影平面的投影点;有序的拟合连接各所述投影点生成所述操作臂的投影图像;在所述显示器中显示所述投影图像。
另一方面,本申请提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被配置为由处理器加载并执行实现如上述任一项实施例所述的图形化显示方法的步骤。
另一方面,本申请提供了一种手术机器人的图形化控制装置,包括:存储器,用于存储计算机程序;及处理器,用于加载并执行所述计算机程序; 其中,所述计算机程序被配置为由所述处理器加载并执行实现如上述任一项实施例所述的图形化显示方法的步骤。
本申请的手术机器人及其图形化控制装置、图形化显示方法,具有如下有益效果:
通过配置虚拟相机来模拟真实相机对操作臂进行成像,可以实现对全部操作臂以及每一操作臂整体的观察,有利于医生全方位的观察操作臂的运动状态,进而有助于手术的可靠性及连续性。
附图说明
图1为现有技术手术机器人一手术状态下的局部示意图;
图2为本申请手术机器人一实施例的结构示意图;
图3为图1所示手术机器人一实施例的局部示意图;
图4为手术机器人图形化显示方法一实施例的流程图;
图5为手术机器人中操作臂与动力部的结构示意图;
图6为图1所示手术机器人一实施例的虚拟相机布局示意图;
图7为图1所示手术机器人一实施例的虚拟相机的配置界面示意图;
图8为图4所示图形化显示方法一实施例的投影成像的原理图;
图9~图10分别为图形化显示方法一实施例的显示界面示意图;
图11为手术机器人图形化显示方法一实施例的流程图;
图12~图13分别为图形化显示方法一实施例的显示界面示意图;
图14为手术机器人图形化显示方法一实施例的流程图;
图15为图形化显示方法一实施例的显示界面示意图;
图16~图17分别为手术机器人图形化显示方法一实施例的流程图;
图18为图形化显示方法一实施例的显示界面示意图;
图19为手术机器人图形化显示方法一实施例的流程图;
图20~图21分别为图1所示手术机器人一实施例的虚拟相机的配置界面示意图;
图22~图23分别为手术机器人图形化显示方法一实施例的流程图;
图24为图1所示手术机器人一实施例的虚拟相机的配置界面示意图;
图25~图27分别为手术机器人图形化显示方法一实施例的流程图;
图28为采用大视场角观察操作臂的示意图;
图29为采用如图28所示大视场角生成的显示界面示意图;
图30为调整如图28所示大视场角之后生成的显示界面示意图;
图31为手术机器人图形化显示方法一实施例的流程图;
图32为图1所示手术机器人一实施例的局部示意图;
图33~图34分别为图形化显示方法一实施例的显示界面示意图;
图35~图38分别为手术机器人图形化显示方法一实施例的流程图;
图39为本申请手术机器人另一实施例的结构示意图;
图40为本申请一实施例的手术机器人的图形化控制装置的结构示意图。
具体实施方式
为了便于理解本申请,下面将参照相关附图对本申请进行更全面的描述。附图中给出了本申请的较佳实施方式。但是,本申请可以以许多不同的形式来实现,并不限于本申请所描述的实施方式。相反地,提供这些实施方式的目的是使对本申请的公开内容理解的更加透彻全面。
需要说明的是,当元件被称为“设置于”另一个元件,它可以直接在另一个元件上或者也可以存在居中的元件。当一个元件被认为是“连接”另一个元件,它可以是直接连接到另一个元件或者可能同时存在居中元件。当一个元件被认为是“耦接”另一个元件,它可以是直接耦接到另一个元件或者可能同时存在居中元件。本申请所使用的术语“垂直的”、“水平的”、“左”、“右”以及类似的表述只是为了说明的目的,并不表示是唯一的实施方式。本申请所使用的术语“远端”、“近端”作为方位词,该方位词为介入医疗器械领域惯用术语,其中“远端”表示手术过程中远离操作者的一端,“近端”表示手术过程中靠近操作者的一端。本申请所使用的术语“第一/第二”等表 示一个部件以及一类具有共同特性的两个以上的部件。
除非另有定义,本申请所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本申请中所使用的术语只是为了描述具体的实施方式的目的,不是旨在于限制本申请。本申请所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。
如图2至图3所示,其分别为本申请手术机器人一实施例的结构示意图,及其局部示意图。
手术机器人包括主操作台2及由主操作台2控制的从操作设备3。主操作台2具有运动输入设备21及显示器22,医生通过操作运动输入设备21向从操作设备3发送控制命令,以令从操作设备3根据医生操作运动输入设备21的控制命令执行相应操作,并通过显示器22观察手术区域。其中,从操作设备3具有臂体机构,臂体机构具有机械臂30及可拆卸地装设于机械臂30远端的操作臂31。机械臂30包括依次连接的基座及连接组件,连接组件具有多个关节组件。操作臂31包括依次连接的连杆32、连接组件33及末端器械34,其中,连接组件33具有多个关节组件,通过调节操作臂31的关节组件调节末端器械34的姿态;末端器械34具有图像末端器械34A及操作末端器械34B。图像末端器械34A用于采集视野内的图像,显示器22用于显示该图像。操作末端器械34B用于执行手术操作如剪切、缝合。
图1展示的手术机器人为单孔手术机器人,各操作臂31通过装设于机械臂30远端的同一个穿刺器4插入至患者体内。在单孔手术机器人中,医生一般仅对操作臂31进行控制以完成基本手术操作。此时,单孔手术机器人的操作臂31应当同时具有位置自由度(即定位自由度)和姿态自由度(即定向自由度),以实现在一定范围内位姿的变化,例如操作臂31具有水平移动自由度x、竖直移动自由度y,自转自由度α、俯仰自由度β及偏航自由度γ,操作臂31还可以在机械臂30远端关节即动力机构301的驱动下实现前后移动自由度(即进给自由度)z。例如,动力机构301具有导轨和滑动设置于导轨上的动力部,操作臂可拆卸的装设于动力部上,一方面,动力部在导轨上的 滑动提供操作臂31前后移动自由度z,另一方面,动力部为操作臂31的关节提供动力实现其余5个自由度(即[x,y,α,β,γ])。
手术机器人还包括控制器。控制器可以集成于主操作台2,也可以集成于从操作设备3。当然,控制器也可以独立于主操作台2和从操作设备3,其例如可部署在本地,又例如控制器可以部署在云端。其中,控制器可以由一个以上的处理器构成。
手术机器人还包括输入部。输入部可以集成于主操作台2。输入部可以也可以集成于从操作设备3。当然,输入部也可以独立于主操作台2和从操作设备3。输入部例如可以是鼠标、键盘、语音输入装置、触摸屏。一实施例中,采用触摸屏作为输入部,触摸屏设置于主操作台2的扶手上,可供配置的信息可显示于该触摸屏,例如待选择的虚拟相机及其虚拟相机参数等。其它实施例中,可供配置的信息可显示于主操作台2的显示器22或外置其它显示器。
操作臂31还包括感应关节的关节变量的传感器。这些传感器包括感应关节组件转动运动的角度传感器及感应关节组件线性运动的位移传感器,具体可根据关节的类型来配置适应的传感器。
控制器与这些传感器耦接,并与输入部及主操作台2的显示器22耦接。
一实施例中,提供一种手术机器人的图形化显示方法,该图形化显示方法可以由控制器执行。参阅图4,该图形化显示方法包括如下步骤:
步骤S11,获得操作臂的特征点序列及操作臂对应的运动学模型。
示例性的,如图5所示,操作臂31的驱动盒310抵接于动力机构301的动力部302的抵接面装设有存储单元311,相应在动力部302抵接于驱动盒310的抵接面装设有与存储单元311配套的读取单元303,该读取单元303与控制器耦接,操作臂31装设于动力部302时,读取单元303与耦接存储单元311通讯,读取单元303从存储单元311中读取相关信息。该存储单元311例如是存储器、电子标签。存储单元例如存储有操作臂的类型、特征点序列以及预先根据操作臂的连杆参数构建的运动学模型中的一种或两种以上的组 合。特征点序列包括多个特征点,特征点可以表征操作臂中任意的特征部位,特征部位可以指操作臂的末端器械、关节、连杆中的一种或两种以上。
例如,存储单元311存储有操作臂的特征点序列及运动学模型,可以直接从该存储单元311获得所需要的操作臂的特征点序列及运动学模型。
又例如,存储单元311仅存储有操作臂的类型,而在其它与控制器耦接的存储单元存储有不同类型的操作臂的特征点序列及运动学模型。可以根据获取的操作臂的类型获得相应操作臂的特征点序列及运动学模型。
步骤S12,获取传感器感应的操作臂中各关节的关节变量。
关节变量指关节中转动关节的关节量及/或移动关节的关节偏移量。
步骤S13,获取输入部选择的虚拟相机。
顾名思义,虚拟相机为非实际存在的相机,其不会真实的采集物体的图像,其体现的仅是一种视点的概念,如图6所示,图6示意了一种虚拟相机相对于操作臂的空间分布图,默认的虚拟相机100可以定义成其中的任何一个,例如默认选择为穿刺器4上的虚拟相机100。可以对虚拟相机进行参数配置,虚拟相机的虚拟相机参数(即配置参数)至少包括(虚拟)位姿,相应于真实相机的相机参数如焦距及/或光圈,虚拟相机参数同样包括虚拟焦距及/或虚拟光圈。通常,(虚拟)焦距对应可调(虚拟)相机的视场角,(虚拟)光圈对应可调(虚拟)相机的景深。一实施例中,也可以描述虚拟相机参数包括视场角及/或景深,对于虚拟相机而言,视场角及/或景深也是虚拟的。即使相机、焦距、光圈是虚拟的,但同样可以利用如真实相机那样的成像原理以实现本申请的主旨。不同虚拟相机参数可以向医生展示不同的成像效果。
这些虚拟相机参数可以固化在存储于手术机器人的存储器的系统配置文件中,通过控制器读取该系统配置文件即可获取。这些虚拟相机参数还可以由医生在手术前或手术中根据需要通过一与控制器耦接的输入部来进行手动设置,这种设置方式是按需的,例如,这些虚拟相机参数可以通过在文本控件输入相关数据得到,又例如,这些虚拟相机参数可以通过从选项控件选取得到。
该虚拟相机的位姿可以相同于真实相机(即图像末端器械)的位姿,以从与真实相机相同的视点对操作臂进行观察。该虚拟相机的位姿也可以不同于真实相机的位姿,以从与真实相机不同的视点对操作臂进行观察。通常,可以选择虚拟相机的位姿不同于真实相机的位姿来进行观察,有助于获取到操作臂更全面的信息,例如此时操作臂还可以是相机臂,以由虚拟相机进行观察。
其中,在获取到的虚拟相机中,包括获取该虚拟相机的位姿、以及该虚拟相机的虚拟相机参数。
仅从理论上考虑,虚拟焦距最长可以无限大,最短可以无限趋近于0。示例性的,可以仿照具有例如焦距范围2mm~70mm的真实相机的镜头来配置该虚拟相机可供选择的虚拟焦距,例如可配置为2mm~50mm的虚拟焦距,如2mm、5mm、10mm、20mm。进而根据最短虚拟焦距及/或最长虚拟焦距来配置该虚拟相机的位置。其中,虚拟焦距越小,投影图像越大,越能查看局部细节;虚拟焦距越大,投影图像越小,越能查看全局。
仅从理论上考虑,虚拟光圈最大可以无限大,最小可以无限趋近于0。示例性的,可以仿照具有例如光圈范围F1,F1.2,F1.4,F2,F2.8,F4,F5.6,F8,F11,F16,F22,F32,F44,F64的真实相机的镜头来配置该虚拟相机可供选择的虚拟光圈。例如可配置为F2.8,F4,F5.6,F8的虚拟光圈。其中,虚拟光圈越大,景深越小;其中,虚拟光圈越小,景深越大。
如图7所示,示意了一种虚拟相机的配置界面。可以在该界面上选择虚拟相机的位姿、虚拟焦距及虚拟光圈。
步骤S14,根据操作臂的运动学模型及关节变量确定操作臂的特征点序列中各特征点在虚拟相机的投影平面的投影点。
例如,可以先根据操作臂的运动学模型及关节变量计算出特征点序列中各特征点在参考坐标系下的第一位置,然后根据虚拟相机坐标系与参考坐标系的坐标转换关系将该第一位置转换成虚拟相机坐标系下的第二位置,最后将第二位置投影成虚拟相机的投影平面的第三位置作为投影点。其中,虚拟 相机的投影平面通常关联于虚拟相机的虚拟焦距,因而通常可以根据虚拟相机的虚拟焦距而确定虚拟相机的投影平面,进而相当于可根据该虚拟焦距获得各第二位置在投影平面的投影点。上述的参考坐标系可以设置在任何地方,通常考虑设置在手术机器人上,较佳的,设置在从操作设备上。例如,参考坐标系是从操作设备的基坐标系。又例如,参考坐标系是从操作设备中机械臂的工具坐标系。
进一步地,根据该虚拟焦距获得各第二位置在投影平面的投影点的实现过程可以分为如下两个步骤:第一步骤,获取各特征点表征(关联)的关节的轮廓信息,该轮廓信息举例包括尺寸信息及/或线型信息等;及第二步骤,结合虚拟焦距及轮廓信息获得各第二位置在投影平面的投影点。
其中,上述的第一位置、第二位置及第三位置可以是一个点位置,也可以是一个由多个点构成的区域,可知,投影点可以被理解成一个点或一个点集,具体根据特征点的选择而确定,即特征点本身选取的即为特征部位的一个点时,投影点即一个点;而特征点本身选取的即为特征部位的一个点集(即“区域”的概念)时,投影点对应为一个点集(即区域)。如果特征点的点集可以反映特征部位的几何尺寸,那么投影点也可以反映特征部位的几何尺寸,进而可以近似的展示操作臂的真实结构,以更加利于展示操作臂的运动状态。
此外,选取的特征点的数量越多或越密集,越能近似的展示操作臂的真实结构。例如,通过拟合连接这些投影点,可以更准确地体现操作臂的线型特征,如直线型,如曲线型,如曲线弧度等。
在该步骤中,更具体的,可以结合虚拟相机的虚拟焦距(虚拟视场角)及/或虚拟光圈(景深)、运动学模型及关节变量确定特征点序列中各特征点在虚拟相机的投影平面的投影点。
参阅图8,图8示意了一种投影原理。该操作臂具有特征点序列,该特征点序列包括特征点Q1、Q2、Q3及Q4,在虚拟相机的虚拟成像下,在投影平面获得投影点序列,该投影点序列对应为q1、q2、q3及q4。
示例性的,以特征点Q1和Q2为例进行说明,根据运动学模型及关节变 量获得了Q1和Q2在空间中的位置分别为Q1(X1,Y1,Z1)和Q2(X2,Y2,Z2)。结合虚拟焦距确定该特征点Q1和特征点Q2在投影平面的投影点q1(x1,y1)和q2(x2,y2)可以通过如下公式获得:
x1=fx*(X1/Z1)+cx;
y1=fy*(Y1/Z1)+cy;
x2=fx*(X12/Z12)+cx;
y2=fy*(Y12/Z12)+cy;
其中,fx为水平方向焦距,fy为竖直方向焦距,cx为水平方向相对光轴偏移,cy为竖直方向相对光轴偏移。其中,fx与fy的数值可以相等,也可以不等。
步骤S15,有序的拟合连接各投影点生成操作臂的投影图像。
该步骤可以根据投影点对应的特征点在特征点序列中的顺序有序的连接各投影点以生成操作臂的投影图像,这里的“有序”指的是投影点之间对应的顺序,而并非先连哪个投影点后连哪个投影点,根据投影点之间对应的顺序从相应于操作臂真实结构的近端向远端依次连接、或者从远端向近端依次连接、或者从中间向两端依次连接均是可行的。
此外,该步骤可以结合特征部位的轮廓信息有序的拟合连接各投影点以生成操作臂的投影图像。例如,操作臂各特征部位实际的几何尺寸大致相当时,可以用与投影点尺寸相当的线段连接各投影点。
此外,“拟合连接”可指贴近特征部位线型特征的连接方式,例如,对于整体呈直线型的操作臂而言,用直线段连接相邻投影点,又例如,对于至少部分呈曲线型的操作臂而言,用曲线段连接呈曲线型的部分所对应的投影点。拟合连接的方式可以体现操作臂的线型特性。
继续参阅图8,将q1、q2、q3及q4进行有序的拟合连接即可得到相应于操作臂的投影图像。
步骤S16,在显示器中显示投影图像。
通过上述步骤S11~步骤S16,医生可以通过投影图像观察到所有操作臂 及每个操作臂完整的特征部位的运动状态而不再会存在视野盲区,有助于辅助医生可靠且连续地实施手术。结合30和图33参阅,图32中在真实相机的真实可视区域外操作臂31b和31c可能发生潜在碰撞而无法观察到,而通过上述步骤S11~步骤S16,借助生成的投影图像可以观察到该潜在可能会发生的碰撞。
图9示意了一种显示界面,该显示界面仅生成了手术臂的投影图像。图10示意了另一种显示界面,该显示界面同时生成了手术臂及相机臂的投影图像。图9和图10中的投影图像均反映了对应操作臂的各特征点的运动状态。
上述实施例中,由于投影图像由一系列有序连接的投影点所形成,而这些特征点有可能并不容易直观的体现操作臂的末端器械的结构特征,因而为了较容易的体现末端器械的结构特征,如图11所示,控制器可以被配置成在上述步骤S15,即有序的拟合连接各投影点生成操作臂的投影图像的步骤中,执行:
步骤S151,获取操作臂的末端器械的图标。
例如,可以先获取操作臂的类型,然后根据操作臂的类型匹配出操作臂的末端器械的图标。又例如,可以根据获取的特征点序列匹配出操作臂的末端器械的图标。这些图标预先与操作臂的类型及/或特征点序列相关联的存储于存储单元中。
步骤S152,根据关节变量及运动学模型确定末端器械在虚拟相机坐标系下的位姿。
步骤S153,根据末端器械在虚拟相机坐标系下的位姿对图标进行旋转及/或缩放处理。
其中,通常根据末端器械在虚拟相机坐标系下的位置对图标进行缩放,根据末端器械在虚拟相机坐标系下的姿态(方向)对图标进行旋转。
步骤S154,将经处理后的图标拼接于远端的投影点进而生成投影图像。
参阅图12,图12示意了一种显示界面,该显示界面在投影图像中显示了相应操作臂的末端器械的形状,当然,该投影图像并没有反映出相应操作 臂的轮廓形状。参阅图13,图13示意了另一种显示界面,该显示界面在投影图像中同样显示了相应操作臂的末端器械的形状,当然,该投影图像反映出了相应操作臂的轮廓形状。
一实施例中,本申请的手术机器人中,操作臂包括具有图像末端器械的相机臂及/或具有操作末端器械的手术臂。如图14所示,控制器还被配置成执行如下步骤:
步骤S21,检测操作臂中是否具有相机臂。
该步骤S21可以由用户通过输入部触发。该检测步骤例如还可以通过这样来实现:获取操作臂的类型,然后根据操作臂的类型来判断操作臂中是否包括相机臂。当然,手术时都必须具有相机臂。
在该步骤中检测到操作臂中具有相机臂时,进入步骤S22。
步骤S22,获取相机臂的图像末端器械的相机参数,并根据相机参数计算图像末端器械的可见区域。
其中,图像末端器械的相机参数包括焦距和光圈。
步骤S23,根据相机臂的关节变量及运动学模型确定图像末端器械在参考坐标系下的位姿。
步骤S24,根据在参考坐标系下图像末端器械的位姿及虚拟相机的位姿之间的转换关系将图像末端器械的可见区域换算为虚拟相机的可见区域。
步骤S25,计算虚拟相机的可见区域在投影平面上的边界线,并在显示器显示的投影图像中显示边界线。
参阅图15,图15示意了一种显示界面,该显示界面的投影图像中示出了图像末端器械的可视区域,该可视区域外的部分即为非可视区域。
通过上述步骤S21~步骤S25,这样便于医生从投影图像中明确感知操作臂中哪些部分属于真实视野下可见的,哪些部分是真实视野下不可见的。
一实施例中,如图16所示,控制器还被配置成在上述步骤S15,即有序的拟合连接各投影点生成操作臂的投影图像的步骤中,执行如下步骤:
步骤S151’,获取由相机臂的图像末端器械采集的手术区域的操作图像。
步骤S152’,从操作图像中识别出手术臂的特征部位。
可以采用图像识别的方式。更佳的,可以结合神经网络如卷积神经网络的方式来进行图像识别。
步骤S153’,根据识别出的特征部位从特征点序列中匹配出关联的第一特征点。
其中,在该特征点序列中,除了可以匹配到的第一特征点之外,还包括未匹配到的第二特征点。应当理解的,“第一特征点”指的是一类特征点,在本文中其指根据识别出的特征部位匹配出的全部特征点,其可能是一个也可能是两个以上。“第二特征点”指的是另一类特征点,在本文中其指特征点序列中除第一特征点之外剩余的全部特征点,其同样可能是一个也可能是两个以上。
步骤S154’,有序的拟合连接各所述投影点并标记所述投影点中关联所述第一特征点的第一投影点及与所述第一投影点连接的线段以生成所述操作臂的投影图像。
尤其适用于在特征点比较密集的场合例如每个特征部位均对应由两个以上特征点表征的场合下,通过上述步骤S151’~步骤S154’,即通过标记第一投影点及与其连接线段,能够较好的展现出操作臂在图像末端器械下可视的部分及不可视的部分。
一实施例中,参阅图17,控制器还可以被配置成在步骤S153’,即根据识别出的特征部位从特征点序列中匹配出关联的第一特征点的步骤后,执行如下步骤:
步骤S155’,获取未匹配到的第二特征点。
简单而言,从特征点序列中排除第一特征点就可以得到第二特征点。
步骤S156’,结合第二特征点对应的特征部位的轮廓信息、关节变量及运动学模型生成相应特征部位的图像模型。
这个图像模型可以是重构的计算机模型或者计算获得的投影模型。
步骤S157’,将图像模型转换成在图像末端器械坐标系下的补充图像。
步骤S158’,根据第二特征点与第一特征点在特征点序列中的顺序关系将补充图像拼接到第一特征点对应的特征部位的图像以在操作图像中形成操作臂完整的子图像。
步骤S159’,在显示器中显示具有操作臂完整的子图像的操作图像。
图18示意了一种显示界面,该显示界面将操作图像不完整的操作臂进行了补充。
通过上述步骤S155’~步骤S159’,也能够辅助医生观看到真实相机观看不到的操作臂的部分特征部位。
一实施例中,参阅图19,控制器还可以被配置成执行如下步骤:
步骤S31,获取操作臂的在第一方向上的最大运动范围。
步骤S32,根据操作臂的关节变量及运动学模型计算操作臂在第一方向上的运动量。
步骤S33,根据第一方向上的最大运动范围及运动量生成图标。
该最大运动范围可以预先存储于前述的存储单元中。
步骤S34,在显示器中显示图标。
这样的图表显示可继续参阅图9、图12及图13。
该第一方向可以是前后进给方向、左右移动方向、上下移动方向、自转方向、俯仰方向、偏航方向中一种或两种以上,具体可根据操作臂所具有的有效自由度来进行配置。示例性的,该第一方向是前后进给方向。
该图标可以是进度条或饼图。例如,在进度条中,最大运动范围呈固定长度条,运动量呈固定长度条长度范围内的可变长度条。其中,在运动量增大或减小时,相应可加深或减淡可变长度条的颜色。此外,也可以单独或结合计算运动量在最大运动范围中的比例值并在进度条的显示区域进行显示,比如显示在运动量的可变长度条中。
通过上述步骤S31~步骤S34,可以起到提示医生在相应方向注意运动范围的作用。
一实施例中,该控制器还可以被配置成:从操作臂中检测出当前受控制 的第一操作臂,进而在投影图像中标识出第一操作臂。这样可以在显示器中差异化的显示出受控的操作臂和非受控的操作臂。其中,操作臂是否受控可以根据是否检测到主动控制该操作臂的启动命令来进行判断。
上述实施例中,可供输入部选择的不同虚拟相机在参考坐标系下具有不同的位姿,以从不同位置及/或姿态(方向)模拟真实相机如图像末端器械来观察操作臂。
一实施例中,虚拟相机在参考坐标系下的位姿可以基于操作臂在参考坐标系下的可达工作空间(简称可达空间)来进行确定。这样可以让虚拟相机的位姿跟其可达工作空间产生关联性以便于确定。
进一步地,虚拟相机在参考坐标系下的位姿可以基于操作臂在参考坐标系下的可达工作空间的并集空间来进行确定。操作臂只有一个时,这个并集空间相等于该操作臂的可达工作空间。操作臂为两个以上时,这个并集空间是各操作臂的可达工作空间的并集所对应的空间。其中,各操作臂在参考坐标系下的可达工作空间可以根据该操作臂的运动学模型确定,并存储于前述的存储单元中以供直接调用。当然,各操作臂在参考坐标系下的可达工作空间也可以根据该操作臂的运动学模型以在每次启动手术机器人时重新计算一次或多次。
更进一步地,虚拟相机在参考坐标系下的位置始终位于并集空间的外部,且虚拟相机在参考坐标系下的姿态始终朝向并集空间。
这样确定的虚拟相机的位姿能够满足始终能完整地观察各操作臂的运动状态,包括可观察各操作臂的运动状态及可观察操作臂之间的运动状态。
该虚拟相机被配置有可供选择的虚拟焦距。一实施例中,虚拟相机的位置只要位于最短虚拟焦距恰好能可见整个并集空间所确定的区域以外即可。一实施例中,虚拟相机的位置只要位于最小虚拟焦距恰好能可见整个并集空间所确定的区域以内也可行。一实施例中,该虚拟相机的位置可以由可供配置的最长焦距及最短焦距来共同进行限定,它位于由最长焦距确定的第一区域及最短焦距确定的第二区域之间的交集区域。
该虚拟相机的姿态(方向)始终朝向并集空间中某一相对确定的点或区域。一实施例中,虚拟相机的姿态始终朝向并集空间的中心。这样能够保证虚拟相机的虚拟成像面始终能对各操作臂进行虚拟成像。
一实施例中,控制器可以被配置成在显示器的第一显示窗口中显示投影图像,并在第一显示窗口中生成可供选择的虚拟相机的图标。
其中,对应虚拟相机的图标与投影图像的相对位置可以是固定的,随着投影图像视点的变换而同步变换。投影图像视点(即坐标)的变换跟选择的虚拟相机的位置相关。
示例性的,可以将对应虚拟相机的图标设置成六个,即代表六个不同位置的虚拟相机,该六个图标例如分别对应从左侧、右侧、上侧、下侧、前侧及后侧对操作臂进行虚拟成像以生成相应视点下的投影图像。
示例性的,图标展现为箭头图案或相机图案,图标被转动选择的任意一个对应一个虚拟相机。该图标例如还可以是一个点、或一个圆圈等。如图22所示,图标展现为箭头图案,箭头所示为视场角的调整方向。
示例性的,图标展现为可转动的球体,球体被旋转到达的任意一个位置对应一个虚拟相机。例如,可以将球体表面的任意一个位置与前述的第一区域、第二区域及/或该第一区域和第二区域的交集区域的某些位置进行对应,因此球体被旋转的任意一个位置都可以代表一个虚拟相机。当然,这些虚拟相机的姿态都朝向可达空间内的某一确定的点,以保证能够看到各完整的操作臂。如图23所示,图标展现为球体,箭头所示为视场角的可调整方向。
一实施例中,参阅图22,控制器被配置成在上述步骤S13,即获取输入部选择的虚拟相机的步骤中,执行:
步骤S131,获取输入部选择的虚拟相机及输入部输入的虚拟相机的至少两个目标位置。
在该步骤中,选择的虚拟相机主要指选定虚拟相机的虚拟焦距及/或虚拟光圈;输入的虚拟相机的至少两个目标位置可以是离散的两个以上的位置,也可以是连续的两个以上的位置。
在输入目标位置时,可以同时设置循迹模式,如单次循迹投影模式、多次循迹投影模式及往复循迹投影模式。一些列的目标位置中包括起始位置A和终点位置B,对于单次循迹投影模式而言,只进行一次A至B中每个目标位置的投影;对于多次循迹投影模式而言,进行指定次数A至B中每个目标位置的投影;对于往复循迹投影模式而言,反复进行A至B至A至B……中每个目标位置的投影。对于单次循迹投影模式及多次循迹投影模式而言,在A至B的投影过程整体结束之后,虚拟相机可以停留在一个指定位置持续进行投影,该指定位置可以是A至B中任意一个位置如A或B,也可以是其它默认的位置。
步骤S132,按照虚拟相机的预设运动速度并根据运动学模型及关节变量确定特征点序列中各特征点在虚拟相机的每一目标位置下的投影平面的目标投影点。
步骤S133,有序的拟合连接每一目标位置下的各目标投影点生成操作臂的目标投影图像。
步骤S134,根据各目标投影图像生成动画。
步骤S135,按照预设频率在显示器上播放动画。
通过上述步骤S131~步骤S135,医生可以动态的观察操作臂相互位置关系和投影信息,解决单一视角下部分信息重叠或投影失真的情况,可从多方位了解空间位置信息。
一实施例中,控制器被配置成在上述步骤S13,即获取输入部选择的虚拟相机的步骤中,执行:图23:
步骤S1311’,获取输入部输入的虚拟相机的运动轨迹。
例如,该运动轨迹可以是光标移动的轨迹,又例如,该运动轨迹可以是手指的滑动轨迹。为便于实施,示例性的,该运动轨迹的起始位置是前述某个图标对应的虚拟相机的位置,该起始位置具有坐标为(x0,y0,z0),在该运动轨迹中,其它位置的坐标保持Z轴坐标不变,而只改变X轴坐标和Y轴坐标。其它实施例中,该运动轨迹的起始位置也并不一定是前述某个图标对应 的虚拟相机的位置,但通常需要先指定整个轨迹的Z轴坐标,进而只改变X轴坐标和Y轴坐标即可。如图24所示,其示意了一种虚拟相机运动轨迹的配置界面。
步骤S1312’,离散运动轨迹获得虚拟相机的各离散位置以作为目标位置。
步骤S132,按照虚拟相机的预设运动速度并根据运动学模型及关节变量确定特征点序列中各特征点在虚拟相机的每一目标位置下的投影平面的目标投影点。
步骤S133,有序的拟合连接每一目标位置下的各目标投影点生成操作臂的目标投影图像。
步骤S134,根据各目标投影图像生成动画。
步骤S135,按照预设频率在显示器上播放动画。
一实施例中,如图25所示,该控制器通常还被配置成执行如下步骤:
步骤S41,获取图像末端器械采集的手术区域的操作图像。
步骤S42,在显示器中显示操作图像。
步骤S43,在操作图像中悬浮的显示投影图像。
这里意味着可以较容易的改变投影图像在操作图像中的位置。例如,在显示器中生成有一浮动窗口,浮动窗口显示投影图像,显示器其余区域显示操作图像。这样有助于让投影图像可以视需要避开操作图像中一些关键的位置而利于手术实施。
一实施例中,如图26所示,该控制器还可以被配置成在步骤S43中,即在操作图像中悬浮的显示投影图像的步骤中,执行:
步骤S431,获取操作图像与投影图像的重叠区域,并获得操作图像在重叠区域的部分的第一图像属性。
步骤S432,根据第一图像属性对投影图像在重叠区域的部分的第二图像属性进行调节。
这些图像属性包括颜色、饱和度、色调、亮度、对比度中的一种及两种以上的组合。例如颜色、亮度、对比度中的一种及两种以上的组合。
通过上述步骤S431~步骤S432,可以根据操作图像的图像属性对投影图像的图像属性自适应的进行调整。例如,在操作图像较暗时,可以增亮投影图像,或可以改变投影图像的颜色,以使投影图像相对操作图像比较显著以易于医生观察。
一实施例中,如图27所示,该控制器还可以被配置成在步骤S16中,即在显示器中显示投影图像的步骤之前,执行:
步骤S161,检测投影图像是否失真。
在检测到投影图像失真时,进入步骤S162;而在检测到投影图像未失真时,进入步骤S16。
示例性的,投影图像是否失真可以这样进行判断:步骤一、获取各投影点在参考坐标系的位置;步骤二、获得投影点中落入边缘区域内的第一投影点的数量;步骤三、计算第一投影点的数量在投影点的总数量中的比值,并在比值达到阈值时,判断出投影图像失真。
该边缘区域例如可以基于显示投影图像的显示窗口或投影平面划分得到。
步骤S162,增大虚拟相机的虚拟焦距。
即根据焦距与视场角之间几乎成反比的关系,减小视场角。
步骤S14’,结合虚拟焦距及/或虚拟光圈、运动学模型及关节变量确定特征点序列中各特征点在虚拟相机的投影平面的投影点的步骤。
请结合图28~28,图28示意了一种用大视场角观察操作臂的示意图;图29示意了在图28所示的视场角下生成的具有第一投影图像的显示界面,可见该投影图像的边缘区域存在压缩问题即失真了;图30示意了视场角调整后重新生成的具有第二投影图像的显示界面,可见该投影图像的边缘区域得到了展开即消除了失真问题。
上述步骤S162示例性的可以为按比例系数增大虚拟相机的虚拟焦距。简单而言,可以根据公式(1):F=k*f来重新确定,k为调整系数,k>1;f为调整前焦距;F为调整后焦距。
一实施例中,该虚拟焦距还可以根据如下公式(2)来重新确定,例如以 重新确定水平方向的虚拟焦距为例进行说明,该公式(2)为:
fx=k1*Fx*fx0;
其中,fx0为投影平面中心位置处的焦距;Fx为某一投影点在投影画面上距离中心位置沿X轴方向的距离;k1为设置系数;fx为某一投影点处的x方向焦距。为了增大虚拟焦距只要满足k1*Fx>1即可。
该公式(2)将虚拟相机的虚拟焦距与投影点的位置进行关联,即虚拟焦距与投影点的位置相关,待调整的虚拟焦距随投影点的变化而变化。其中,x代表投影平面中的任意一点,在投影平面中投影点P的位置表示为P(Fx,Fy)。
根据公式(2)相同的原理,还可以确定竖直方向的虚拟焦距、重新确定水平方向相对光轴偏移cx、及重新确定竖直方向相对光轴偏移cx,分别可以通过如下类似方法实现:
fy=k2*Fy*fy0;
cx=k3*Fx*cx0;
cy=k4*Fy*cy0;
通过上述步骤S161~步骤S162,通过展开投影图像,可以解决大视场角条件下视场边缘的特征点具有投影压缩从而丧失观察信息有效性的问题,。
一实施例中,该控制器还可以被配置成执行:
获取显示或隐藏相应操作臂的图像的操作指令,进而根据该操作指令显示或隐藏相应操作臂的图像。
具体而言,在获取到针对操作臂的显示图像的操作指令时,则对应在步骤S14中确定该操作臂对应的投影点。而在获取到针对操作臂的隐藏图像的操作指令时,则对应在步骤S14中无需确定该操作臂对应的投影点。这样相当于可以自定义的配置投影图像,以实现简化投影图像及去除干扰的子图像等目的。一实施例中,也可以通过调节虚拟相机的虚拟光圈(虚拟景深)来至少部分实现相类似的目的,示例性的,例如可以通过调节虚拟光圈虚化掉远离虚拟相机的操作臂而只清楚的对邻近虚拟相机的操作臂进行虚拟成像。
一实施例中,上述的图形化显示方法还可以包括:
在操作臂中第一操作臂达到事件的阈时,在投影图像中对第一操作臂的至少部分进行标识并显示于显示器。
其中,第一操作臂同样指的是一类而不限于某具体的一个操作臂。该阈是警告阈,该事件是要避免的情况。
一具体实施方式中,该警告阈基于第一操作臂与操作臂中第二操作臂之间的距离,例如,该警告阈可以是一个数值。要避免的情况是第一操作臂与第二操作臂之间的碰撞,例如,该要避免的情况可以是一个数值。第二操作臂同样指的是一类而不限于某具体的一个操作臂。例如,如图31所示,该方法可以通过如下步骤实现:
步骤S51,获取第一操作臂与第二操作臂之间的最小距离。
该步骤S51是实时进行的。
步骤S52,判断该最小距离与警告阈和要避免的情况之间的关系。
警告阈及要避免的情况均用数值表示,且在要避免的情况是第一操作臂与第二操作臂之间的碰撞的情况下,该警告阈代表的数值d lim大于要避免的情况代表的数值d min,即d lim>d min,第一操作臂与第二操作臂之间的最小距离用d表示。一实施例中,d min=0,它代表已碰撞。
在该步骤S52中,如果d>d lim,即最小距离未到达警告阈,则继续进行步骤S51;如果d min<d≤d lim,即最小距离到达警告阈未到达要避免的情况,进入步骤S53;如果d=d min,即最小距离越过警告阈到达要避免的情况,进入步骤S54。
步骤S53,对第一操作臂和第二操作臂的投影图像上的最小距离点进行第一标识。
如图32所示,操作臂包括相机臂31a以及手术臂31b和手术臂31c,且手术臂31b和手术臂31c之间的最小距离到达了警告阈,此时,在该步骤S53中,可以在手术臂31b(即第一操作臂)和手术臂31c(即第二操作臂)的投影图像中的最小距离点P1、P2用颜色或图形框例如圆圈等进行标识,如图33所示。而在重新检测到最小距离未到达警告阈时,通常,消除对第一操作 臂和第二操作臂的投影图像上的最小距离点的标识。而在重新检测到最小距离到达要避免的情况时,进入步骤S54,即进行第二标识。
此外,在进行第一标识的过程中,也即在满足d min<d≤d lim的条件时,可以随着最小距离的逐渐减小或增大,对第一标识作出改变。例如对颜色进行渐进变换,但可以不同于d=d min时的颜色;例如对第一标识进行频闪,但可以不同于d=d min时的频闪。
步骤S54,对第一操作臂和第二操作臂的投影图像上的最小距离点进行第二标识。
第一标识与第二标识不同。在该步骤S54中,可以例如强化对第一操作臂和第二操作臂的模型中的最小距离点P1、P2的标识如深化颜色;或者,可以对第一操作臂和第二操作臂的投影图像中的最小距离点的标识进行闪烁;或者,可以对第一操作臂和第二操作臂的投影图像中的最小距离点的标识做出类型改变如更改图形框的类型,如图34所示,图34中用虚线圆圈替换图33中所示的实线圆圈。而在重新检测到最小距离到达警告阈未到达要避免的情况时,进入步骤S53,即进行第一标识。
通过步骤S51~S54,有助于医生掌握操作臂之间的碰撞位置。
更具体地,如图35所示,上述步骤S51可以通过如下步骤实现:
步骤S511,根据第一操作臂和第二操作臂各自的运动学模型及结构特征构建相应第一操作臂和第二操作臂各自的几何模型。
在该步骤S511中,通常可以使用提及略大的基本几何体代替实际模型进行干涉分析,以提高后续的检测效率。第一操作臂及第二操作臂各自的几何模型可以简化为例如球体、圆柱体、长方体、凸多面体或两个以上的组合。
步骤S512,离散第一操作臂和第二操作臂各自的几何模型获得在参考坐标系下第一操作臂和第二操作臂各自的外部信息点集。
在该步骤S512中,将第一操作臂和第二操作臂各自的几何模型进行数据化处理得到两者各自的外部信息点集。
步骤S513,根据第一操作臂和第二操作臂各自的外部信息点集确定第一 操作臂和第二操作臂之间的最小距离。
在该步骤S513中,可以利用距离跟踪法来确定两者之间的最小距离,更具体的,可以通过遍历算法从第一操作臂和第二操作臂各自的外部信息点集中确定两者之间的最小距离。
更具体地,如图36所示,,上述步骤S53可以通过如下步骤实现:
步骤S531,确定第一操作臂和第二操作臂之间的最小距离对应的第一操作臂和第二操作臂的投影图像上的最小距离点。
步骤S532,对第一操作臂和第二操作臂的投影图像上的最小距离点进行第一标识。
一实施例中,如图37所示,在最小距离达到警告阈时,图形化显示方法还可以包括如下步骤:
步骤S533,根据第一操作臂和第二操作臂的投影图像上的最小距离点在参考坐标系下的位置确定碰撞方向。
步骤S534,在投影图像中对第一操作臂和第二操作臂之间的碰撞方向进行标识。
上述通过在投影图像对第一操作臂和第二操作臂之间最小距离点及碰撞方向的标识,例如可以用箭头矢量方向来标识碰撞方向,可以为医生提供视觉反馈以避免碰撞。
主操作台的手柄采用机械手柄。一实施例中,如图38所示,对应于上述步骤S53的情况,即最小距离到达警告阈未到达要避免的情况时,包括:
步骤S533,根据第一操作臂和第二操作臂的投影图像上的最小距离点在参考坐标系下的位置确定碰撞方向。
步骤S535,根据碰撞方向产生阻碍机械手柄在关联方向上移动的阻力。
这样可以在操作臂之间具有碰撞趋势时,给医生提供力觉反馈以避免碰撞。
具体而言,该机械手柄具有多个关节组件、与控制器耦接用于感应各关节组件状态的传感器及与控制器耦接用于驱动各关节组件运动的驱动电机。 根据碰撞方向产生阻碍机械手柄在关联方向上移动的阻力更具体地为:根据所述阻力使关联方向上的所述驱动电机产生反向力矩。
在最小距离介于警告阈和要避免的情况之间时,例如,反向力矩可以是恒定大小的;又例如,反向力矩的大小与最小距离的大小呈负相关。反向力矩的大小与最小距离的大小呈负相关的情况下,具体而言,最小距离逐渐减小时,增大反向力矩以产生更大的阻力;而最小距离逐渐增大时,减小反向力矩以产生较小的阻力,例如,该反向力矩的变化是线性的;例如,该反向力矩的变化是非线性的如阶梯式的。在最小距离到达要避免的情况时,产生的反向力矩可以至少最小为完全阻碍机械手柄在该碰撞方向上的移动,一实施例中,可以通过机械臂手柄各关节组件设置的力传感器检测医生施加的力或力矩,进而根据医生施加的力或力矩产生至少可抵消医生施加的力的反向力矩。一实施例中,也可以骤然直接将产生一个足够大的力使得一般力气的医生不足以移动机械手柄在碰撞方向上移动。
一实施例中,警告阈还可以基于第一操作臂中至少一个关节组件的运动范围,要避免的情况是第一操作臂中至少一个关节组件的运动范围的限制。同样的,可以在第一操作臂到达警告阈时,在第一显示窗口或第二显示窗口中对第一操作臂的模型至少相关的关节组件进行标识。此外,也可以在机械手柄处产生阻碍第一操作臂越过警告阈向要避免的情况运动的阻力。该阻力亦由关联的驱动电机产生反向力矩实现。
上述实施例的手术机器人还可以是多孔手术机器人。多孔手术机器人与单孔手术机器人之间的区别主要在从操作设备上。图39示意了一种多孔手术机器人的从操作设备。该多孔手术机器人中从操作设备的机械臂具有依次连接的主臂110、调整臂120及操纵器130。调整臂120及操纵器130均为两个以上,例如四个,主臂110远端具有定向平台,调整臂120近端均连接于定向平台,操纵器130近端连接于调整臂120远端。操纵器130用于可拆卸地连接操作臂150,操纵器130具有多个关节组件。在多孔手术机器人中,不同操作臂150通过不同的穿刺器插入患者体内,多孔手术机器人的操作臂150 相较于单孔手术机器人的操作臂31而言,一般具有较少的自由度,通常,操作臂150仅具有姿态自由度(即定向自由度),当然其姿态的变化一般也对位置产生影响,但因为影响较小通常可以被忽略。操作臂150的位置常由操纵器130辅助实现,由于操纵器130与操作臂150联动实现位姿变化,可以将这两者认为是操纵器组件,与单孔手术机器人中操作臂31相当。
一些实施例中,如图40所示,该图形化控制装置可以包括:处理器(processor)501、通信接口(Communications Interface)502、存储器(memory)503、以及通信总线504。
处理器501、通信接口502、以及存储器503通过通信总线504完成相互间的通信。
通信接口502,用于与其它设备比如各类传感器或电机或电磁阀或其它客户端或服务器等的网元通信。
处理器501,用于执行程序505,具体可以执行上述方法实施例中的相关步骤。
具体地,程序505可以包括程序代码,该程序代码包括计算机操作指令。
处理器505可能是中央处理器CPU,或者是特定集成电路ASIC(ApplicationSpecific Integrated Circuit),或者是被配置成实施本申请实施例的一个或多个集成电路,或者是图形处理器GPU(Graphics Processing Unit)。控制装置包括的一个或多个处理器,可以是同一类型的处理器,如一个或多个CPU,或者,一个或多个GPU;也可以是不同类型的处理器,如一个或多个CPU以及一个或多个GPU。
存储器503,用于存放程序505。存储器503可能包含高速RAM存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
程序505具体可以用于使得处理器501执行以下操作:获得操作臂的特征点序列及其对应的运动学模型;获取传感器感应的关节变量,并获取输入部选择的虚拟相机;根据运动学模型及关节变量确定特征点序列中各特征点 在虚拟相机的投影平面的投影点;有序的拟合连接各投影点生成操作臂的投影图像;在显示器中显示投影图像。
以上所述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. A surgical robot, characterized by comprising:
    an input unit;
    a display;
    an operating arm comprising a plurality of joints and sensors for sensing joint variables of the joints, the operating arm having a feature point sequence composed of a plurality of ordered feature points associated with the corresponding joints;
    and a controller coupled to the input unit, the display, and the sensors and configured to:
    obtain the feature point sequence of the operating arm and its corresponding kinematic model;
    acquire the joint variables sensed by the sensors, and acquire a virtual camera selected by the input unit;
    determine, according to the kinematic model and the joint variables, a projection point of each feature point in the feature point sequence on a projection plane of the virtual camera;
    fit and connect the projection points in order to generate a projection image of the operating arm;
    display the projection image on the display.
  2. The surgical robot according to claim 1, wherein:
    in the step of determining, according to the kinematic model and the joint variables, the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera, the controller is configured to:
    obtain, according to the kinematic model and the joint variables, a first position of each feature point in the feature point sequence in a reference coordinate system;
    convert each first position into a second position in the virtual camera coordinate system;
    acquire a virtual focal length of the virtual camera and determine the projection plane of the virtual camera according to the virtual focal length;
    obtain, according to the virtual focal length, the projection point of each second position on the projection plane.
  3. The surgical robot according to claim 1, wherein:
    in the step of determining, according to the kinematic model and the joint variables, the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera, the controller is configured to:
    obtain, according to the kinematic model and the joint variables, a first position of each feature point in the feature point sequence in the reference coordinate system;
    convert each first position into a second position in the virtual camera coordinate system;
    acquire contour information of the joint corresponding to each feature point;
    obtain the projection point of each second position on the projection plane by combining the virtual focal length and the contour information;
    in the step of fitting and connecting the projection points in order to generate the projection image of the operating arm, the controller is configured to:
    connect the projection points in order, combining the contour information and following the order, within the feature point sequence, of the feature points corresponding to the projection points, thereby generating the projection image of the operating arm.
  4. The surgical robot according to claim 1, wherein:
    in the step of fitting and connecting the projection points in order to generate the projection image of the operating arm, the controller is configured to:
    acquire the type of the operating arm, and match an icon of the end instrument of the operating arm according to the type;
    determine, according to the joint variables and the kinematic model, the pose of the end instrument in the projection plane of the virtual camera;
    rotate and/or scale the icon according to the pose of the end instrument in the projection plane of the virtual camera;
    splice the processed icon onto the distal projection point to generate the projection image.
  5. The surgical robot according to claim 1, wherein:
    the virtual camera has a selectable virtual focal length and/or virtual aperture, and in the step of determining, according to the kinematic model and the joint variables, the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera, the controller is configured to:
    acquire the virtual focal length and/or virtual aperture of the virtual camera selected by the input unit, and determine the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera by combining the virtual focal length and/or virtual aperture, the kinematic model, and the joint variables.
  6. The surgical robot according to claim 5, wherein:
    before the step of displaying the projection image on the display, the controller is configured to perform:
    detecting whether the projection image is distorted;
    when the projection image is detected to be distorted, increasing the virtual focal length of the virtual camera and re-entering the step of determining the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera by combining the virtual focal length and/or virtual aperture, the kinematic model, and the joint variables;
    when the projection image is detected not to be distorted, entering the step of displaying the projection image on the display.
  7. The surgical robot according to claim 6, wherein:
    the controller is configured to:
    acquire the position of each projection point in the reference coordinate system;
    obtain the number of first projection points that fall within an edge region of the projection plane or an edge region of the display window of the display used for displaying the projection image;
    calculate the ratio of the number of first projection points to the total number of projection points, and determine that the projection image is distorted when the ratio reaches a threshold.
  8. The surgical robot according to claim 1, wherein:
    the operating arm comprises a camera arm having an image end instrument;
    the controller is further configured to:
    acquire camera parameters of the image end instrument of the camera arm, and calculate a visible region of the image end instrument according to the camera parameters, the camera parameters including focal length and aperture;
    determine the pose of the image end instrument in the reference coordinate system according to the joint variables and the kinematic model of the camera arm;
    convert the visible region of the image end instrument into a visible region of the virtual camera according to the transformation relationship, in the reference coordinate system, between the pose of the image end instrument and the pose of the virtual camera;
    calculate the boundary line of the visible region of the virtual camera on the projection plane, and display the boundary line in the projection image shown on the display.
  9. The surgical robot according to claim 1, wherein:
    the operating arm comprises a camera arm having an image end instrument and a surgical arm having an operating end instrument;
    the controller is further configured to perform, in the step of fitting and connecting the projection points in order to generate the projection image of the operating arm:
    acquiring an operation image of the surgical area captured by the image end instrument of the camera arm;
    identifying a feature part of the surgical arm from the operation image;
    matching an associated first feature point from the feature point sequence according to the identified feature part;
    fitting and connecting the projection points in order, and marking, among the projection points, the first projection point associated with the first feature point and the line segments connected to the first projection point, to generate the projection image of the operating arm.
  10. The surgical robot according to claim 9, wherein:
    the feature point sequence further includes unmatched second feature points, and after the step of matching an associated first feature point from the feature point sequence according to the identified feature part, the controller is configured to:
    acquire the unmatched second feature points;
    generate an image model of the corresponding feature part by combining the contour information of the feature part corresponding to the second feature point, the joint variables, and the kinematic model;
    convert the image model into a supplementary image in the image end instrument coordinate system;
    splice the supplementary image onto the image of the feature part corresponding to the first feature point according to the order relationship between the second feature point and the first feature point in the feature point sequence, so as to form a complete sub-image of the operating arm in the operation image;
    display on the display the operation image containing the complete sub-image of the operating arm.
  11. The surgical robot according to claim 1, wherein:
    the controller is further configured to:
    acquire the maximum range of motion of the operating arm in a first direction;
    calculate the amount of motion of the operating arm in the first direction according to the joint variables and the kinematic model of the operating arm;
    generate an icon according to the maximum range of motion and the amount of motion in the first direction;
    display the icon on the display.
  12. The surgical robot according to claim 1, wherein:
    a plurality of virtual cameras selectable by the input unit have different poses in the reference coordinate system;
    the pose of the virtual camera in the reference coordinate system is determined based on the reachable workspace of the operating arm in the reference coordinate system.
  13. The surgical robot according to claim 12, wherein:
    the pose of the virtual camera in the reference coordinate system is determined based on the union space of the reachable workspaces of the operating arms in the reference coordinate system;
    the position of the virtual camera in the reference coordinate system is always outside the union space, and the attitude of the virtual camera in the reference coordinate system always faces the union space;
    the virtual camera has a selectable virtual focal length, and the position of the virtual camera lies outside a first region, the first region being the region determined by the shortest virtual focal length just being able to see the union space, or the position of the virtual camera lies within a second region, the second region being the region determined by the longest virtual focal length just being able to see the union space.
  14. The surgical robot according to claim 12, wherein:
    the controller is configured to display the projection image in a first display window of the display, and to generate in the first display window a plurality of selectable icons of the virtual cameras;
    the relative position of the icons and the projection image is fixed and changes as the viewpoint of the projection image changes; or six icons are provided, corresponding respectively to virtual imaging of the operating arm from the left, right, upper, lower, front, and rear sides to generate the projection image under the corresponding viewpoint; or the icon is presented as a rotatable sphere, and each position to which the icon is rotated corresponds to one virtual camera.
  15. The surgical robot according to claim 1, wherein:
    in the step of acquiring the virtual camera selected by the input unit, the controller is configured to perform:
    acquiring the virtual camera selected by the input unit and at least two target positions of the virtual camera input by the input unit;
    determining, at the preset motion speed of the virtual camera and according to the kinematic model and the joint variables, a target projection point of each feature point in the feature point sequence on the projection plane at each target position of the virtual camera;
    fitting and connecting, in order, the target projection points at each target position to generate target projection images of the operating arm;
    generating an animation from the target projection images;
    playing the animation on the display at a preset frequency.
  16. The surgical robot according to claim 1, wherein:
    in the step of acquiring the virtual camera selected by the input unit, the controller is configured to perform:
    acquiring a motion trajectory of the virtual camera input by the input unit;
    discretizing the motion trajectory to obtain discrete positions of the virtual camera as target positions;
    determining, at the preset motion speed of the virtual camera and according to the kinematic model and the joint variables, a target projection point of each feature point in the feature point sequence on the projection plane at each target position of the virtual camera;
    fitting and connecting, in order, the target projection points at each target position to generate target projection images of the operating arm;
    generating an animation from the target projection images;
    playing the animation on the display at a preset frequency.
  17. The surgical robot according to claim 1, wherein:
    the operating arm comprises a camera arm having an image end instrument;
    the controller is configured to:
    acquire an operation image of the surgical area captured by the image end instrument;
    display the operation image on the display;
    display the projection image floating over the operation image;
    in the step of displaying the projection image floating over the operation image, the controller is configured to perform:
    acquiring the overlapping region of the operation image and the projection image, and obtaining a first image attribute of the portion of the operation image within the overlapping region;
    adjusting, according to the first image attribute, a second image attribute of the portion of the projection image within the overlapping region.
  18. The surgical robot according to claim 1, wherein:
    the controller is configured to: when a first operating arm among the operating arms reaches a threshold of an event, mark at least part of the first operating arm in the projection image and display it on the display;
    the threshold is a warning threshold and the event is a situation to be avoided; wherein the warning threshold is based on the range of motion of at least one joint of the first operating arm and the situation to be avoided is the limit of the range of motion of at least one joint of the first operating arm; or the warning threshold is based on the distance between the first operating arm and a second operating arm among the operating arms and the situation to be avoided is a collision between the first operating arm and the second operating arm.
  19. A graphical display method for a surgical robot, characterized in that the surgical robot comprises:
    an input unit;
    a display;
    an operating arm comprising a plurality of joints and sensors for sensing joint variables of the joints, the plurality of joints providing positioning degrees of freedom and/or orientation degrees of freedom, the operating arm having a feature point sequence composed of ordered feature points, the feature points representing the joints;
    the graphical display method comprises the following steps:
    obtaining the feature point sequence of the operating arm and its corresponding kinematic model;
    acquiring the joint variables sensed by the sensors, and acquiring the virtual camera selected by the input unit;
    determining, according to the kinematic model and the joint variables, the projection point of each feature point in the feature point sequence on the projection plane of the virtual camera;
    fitting and connecting the projection points in order to generate a projection image of the operating arm;
    displaying the projection image on the display.
  20. A graphical control device for a surgical robot, characterized by comprising:
    a memory for storing a computer program;
    and a processor for loading and executing the computer program;
    wherein the computer program is configured to be loaded by the processor and executed to implement the steps of the graphical display method according to claim 19.
PCT/CN2020/133490 2020-10-08 2020-12-03 手术机器人及其图形化控制装置、图形化显示方法 WO2022073290A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20956605.8A EP4218652A1 (en) 2020-10-08 2020-12-03 Surgical robot, and graphical control device and graphic display method therefor
US18/030,919 US20240065781A1 (en) 2020-10-08 2020-12-03 Surgical robot, and graphical control device and graphic display method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011068091.4A CN111991085B (zh) 2020-10-08 2020-10-08 手术机器人及其图形化控制装置、图形化显示方法
CN202011068091.4 2020-10-08

Publications (1)

Publication Number Publication Date
WO2022073290A1 true WO2022073290A1 (zh) 2022-04-14

Family

ID=73475081

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133490 WO2022073290A1 (zh) 2020-10-08 2020-12-03 手术机器人及其图形化控制装置、图形化显示方法

Country Status (4)

Country Link
US (1) US20240065781A1 (zh)
EP (1) EP4218652A1 (zh)
CN (3) CN114601564B (zh)
WO (1) WO2022073290A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114886561A (zh) * 2022-05-24 2022-08-12 苏州铸正机器人有限公司 一种机器人手术路径规划装置及其规划方法
WO2023202291A1 (zh) * 2022-04-23 2023-10-26 深圳市精锋医疗科技股份有限公司 手术机器人系统及其控制装置

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114601564B (zh) * 2020-10-08 2023-08-22 深圳市精锋医疗科技股份有限公司 手术机器人及其图形化控制装置、图形化显示方法
CN112472298B (zh) * 2020-12-15 2022-06-24 深圳市精锋医疗科技股份有限公司 手术机器人及其控制装置、控制方法
CN112618020B (zh) * 2020-12-15 2022-06-21 深圳市精锋医疗科技股份有限公司 手术机器人及其控制方法、控制装置
CN114795493A (zh) * 2021-01-06 2022-07-29 深圳市精锋医疗科技股份有限公司 手术机器人及其引导手术臂移动的方法、控制装置
CN114652449A (zh) * 2021-01-06 2022-06-24 深圳市精锋医疗科技股份有限公司 手术机器人及其引导手术臂移动的方法、控制装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102243764A (zh) * 2010-05-13 2011-11-16 东软集团股份有限公司 运动特征点检测方法及装置
CN109419555A (zh) * 2017-08-28 2019-03-05 圣纳普医疗(巴巴多斯)公司 用于外科手术导航系统的定位臂
CN110236682A (zh) * 2014-03-17 2019-09-17 直观外科手术操作公司 用于对成像装置和输入控制装置重定中心的系统和方法
US20200237456A1 (en) * 2007-08-29 2020-07-30 Intuitive Surgical Operations, Inc. Medical robotic system with dynamically adjustable slave manipulator characteristics
CN111991085A (zh) * 2020-10-08 2020-11-27 深圳市精锋医疗科技有限公司 手术机器人及其图形化控制装置、图形化显示方法
CN111991084A (zh) * 2020-10-08 2020-11-27 深圳市精锋医疗科技有限公司 手术机器人及其虚拟成像控制方法、虚拟成像控制装置

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8073528B2 (en) * 2007-09-30 2011-12-06 Intuitive Surgical Operations, Inc. Tool tracking systems, methods and computer products for image guided surgery
EP3162318B1 (en) * 2005-10-20 2019-10-16 Intuitive Surgical Operations, Inc. Auxiliary image display and manipulation on a computer display in a medical robotic system
US10258425B2 (en) * 2008-06-27 2019-04-16 Intuitive Surgical Operations, Inc. Medical robotic system providing an auxiliary view of articulatable instruments extending out of a distal end of an entry guide
US9089256B2 (en) * 2008-06-27 2015-07-28 Intuitive Surgical Operations, Inc. Medical robotic system providing an auxiliary view including range of motion limitations for articulatable instruments extending out of a distal end of an entry guide
US8864652B2 (en) * 2008-06-27 2014-10-21 Intuitive Surgical Operations, Inc. Medical robotic system providing computer generated auxiliary views of a camera instrument for controlling the positioning and orienting of its tip
EP2996611B1 (en) * 2013-03-13 2019-06-26 Stryker Corporation Systems and software for establishing virtual constraint boundaries
US10888389B2 (en) * 2015-09-10 2021-01-12 Duke University Systems and methods for arbitrary viewpoint robotic manipulation and robotic surgical assistance
EP3424033A4 (en) * 2016-03-04 2019-12-18 Covidien LP VIRTUAL AND / OR AUGMENTED REALITY FOR PERFORMING PHYSICAL INTERACTION TRAINING WITH A SURGICAL ROBOT
CN106344151B (zh) * 2016-08-31 2019-05-03 北京市计算中心 一种手术定位系统
CN107995477A (zh) * 2016-10-26 2018-05-04 中联盛世文化(北京)有限公司 图像展示方法、客户端及系统、图像发送方法及服务器
US11589937B2 (en) * 2017-04-20 2023-02-28 Intuitive Surgical Operations, Inc. Systems and methods for constraining a virtual reality surgical system
AU2019214340A1 (en) * 2018-02-02 2020-09-24 Intellijoint Surgical Inc. Operating room remote monitoring
CN109223183A (zh) * 2018-09-30 2019-01-18 深圳市精锋医疗科技有限公司 手术机器人的启动方法、可读取存储器及手术机器人

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200237456A1 (en) * 2007-08-29 2020-07-30 Intuitive Surgical Operations, Inc. Medical robotic system with dynamically adjustable slave manipulator characteristics
CN102243764A (zh) * 2010-05-13 2011-11-16 东软集团股份有限公司 运动特征点检测方法及装置
CN110236682A (zh) * 2014-03-17 2019-09-17 直观外科手术操作公司 用于对成像装置和输入控制装置重定中心的系统和方法
CN109419555A (zh) * 2017-08-28 2019-03-05 圣纳普医疗(巴巴多斯)公司 用于外科手术导航系统的定位臂
CN111991085A (zh) * 2020-10-08 2020-11-27 深圳市精锋医疗科技有限公司 手术机器人及其图形化控制装置、图形化显示方法
CN111991084A (zh) * 2020-10-08 2020-11-27 深圳市精锋医疗科技有限公司 手术机器人及其虚拟成像控制方法、虚拟成像控制装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023202291A1 (zh) * 2022-04-23 2023-10-26 深圳市精锋医疗科技股份有限公司 手术机器人系统及其控制装置
CN114886561A (zh) * 2022-05-24 2022-08-12 苏州铸正机器人有限公司 一种机器人手术路径规划装置及其规划方法
CN114886561B (zh) * 2022-05-24 2024-01-30 苏州铸正机器人有限公司 一种机器人手术路径规划装置及其规划方法

Also Published As

Publication number Publication date
CN111991085B (zh) 2022-03-04
EP4218652A1 (en) 2023-08-02
US20240065781A1 (en) 2024-02-29
CN114831738A (zh) 2022-08-02
CN114601564B (zh) 2023-08-22
CN114601564A (zh) 2022-06-10
CN111991085A (zh) 2020-11-27

Similar Documents

Publication Publication Date Title
WO2022073290A1 (zh) 手术机器人及其图形化控制装置、图形化显示方法
US10660716B2 (en) Systems and methods for rendering onscreen identification of instruments in a teleoperational medical system
US11872006B2 (en) Systems and methods for onscreen identification of instruments in a teleoperational medical system
US11903665B2 (en) Systems and methods for offscreen indication of instruments in a teleoperational medical system
CN111991084B (zh) 手术机器人及其虚拟成像控制方法、虚拟成像控制装置
KR102117273B1 (ko) 수술 로봇 시스템 및 그 제어 방법
US20210369365A1 (en) Systems and methods for master/tool registration and control for intuitive motion
US11258964B2 (en) Synthesizing spatially-aware transitions between multiple camera viewpoints during minimally invasive surgery
CN113645919A (zh) 医疗臂系统、控制装置和控制方法
US20230031641A1 (en) Touchscreen user interface for interacting with a virtual model
JP6112689B1 (ja) 重畳画像表示システム
WO2022126995A1 (zh) 手术机器人及其控制方法、控制装置
EP3977406A1 (en) Composite medical imaging systems and methods
KR101114232B1 (ko) 수술 로봇 시스템 및 그 동작 제한 방법
KR20110047929A (ko) 수술 로봇 시스템 및 그 동작 제한 방법
WO2023150449A1 (en) Systems and methods for remote mentoring in a robot assisted medical system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20956605

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020956605

Country of ref document: EP

Effective date: 20230424

NENP Non-entry into the national phase

Ref country code: DE